00:00:00.000 Started by upstream project "autotest-nightly" build number 4308 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3671 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.124 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.124 The recommended git tool is: git 00:00:00.125 using credential 00000000-0000-0000-0000-000000000002 00:00:00.132 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.151 Fetching changes from the remote Git repository 00:00:00.157 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.176 Using shallow fetch with depth 1 00:00:00.176 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.176 > git --version # timeout=10 00:00:00.217 > git --version # 'git version 2.39.2' 00:00:00.217 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.247 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.247 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.118 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.128 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.138 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.138 > git config core.sparsecheckout # timeout=10 00:00:05.147 > git read-tree -mu HEAD # timeout=10 00:00:05.162 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.183 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.183 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.262 [Pipeline] Start of Pipeline 00:00:05.272 [Pipeline] library 00:00:05.274 Loading library shm_lib@master 00:00:05.274 Library shm_lib@master is cached. Copying from home. 00:00:05.290 [Pipeline] node 00:00:05.313 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.315 [Pipeline] { 00:00:05.326 [Pipeline] catchError 00:00:05.328 [Pipeline] { 00:00:05.339 [Pipeline] wrap 00:00:05.345 [Pipeline] { 00:00:05.351 [Pipeline] stage 00:00:05.352 [Pipeline] { (Prologue) 00:00:05.375 [Pipeline] echo 00:00:05.377 Node: VM-host-SM38 00:00:05.383 [Pipeline] cleanWs 00:00:05.394 [WS-CLEANUP] Deleting project workspace... 00:00:05.394 [WS-CLEANUP] Deferred wipeout is used... 
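For reference, the checkout above is an ordinary shallow clone; a minimal sketch that replays the same sequence by hand (URL and revision copied from the log; the stand-alone script itself is hypothetical):

# Sketch: replay the Jenkins shallow checkout above by hand.
set -euo pipefail
REPO=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
REV=db4637e8b949f278f369ec13f70585206ccd9507
git init jbp && cd jbp
git config remote.origin.url "$REPO"
# --depth=1 is what keeps the nightly clone cheap: only the tip commit of
# refs/heads/master is transferred, hence the roughly five-second fetch above.
git fetch --tags --force --progress --depth=1 -- "$REPO" refs/heads/master
# FETCH_HEAD resolves to the same commit as REV here (see the rev-parse calls above).
git checkout -f "$REV"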
00:00:05.401 [WS-CLEANUP] done 00:00:05.657 [Pipeline] setCustomBuildProperty 00:00:05.735 [Pipeline] httpRequest 00:00:06.554 [Pipeline] echo 00:00:06.555 Sorcerer 10.211.164.20 is alive 00:00:06.570 [Pipeline] retry 00:00:06.572 [Pipeline] { 00:00:06.585 [Pipeline] httpRequest 00:00:06.591 HttpMethod: GET 00:00:06.592 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.592 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.598 Response Code: HTTP/1.1 200 OK 00:00:06.599 Success: Status code 200 is in the accepted range: 200,404 00:00:06.599 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.415 [Pipeline] } 00:00:09.433 [Pipeline] // retry 00:00:09.441 [Pipeline] sh 00:00:09.728 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.745 [Pipeline] httpRequest 00:00:10.617 [Pipeline] echo 00:00:10.619 Sorcerer 10.211.164.20 is alive 00:00:10.627 [Pipeline] retry 00:00:10.629 [Pipeline] { 00:00:10.641 [Pipeline] httpRequest 00:00:10.646 HttpMethod: GET 00:00:10.646 URL: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:00:10.647 Sending request to url: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:00:10.688 Response Code: HTTP/1.1 200 OK 00:00:10.689 Success: Status code 200 is in the accepted range: 200,404 00:00:10.689 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:02:11.190 [Pipeline] } 00:02:11.206 [Pipeline] // retry 00:02:11.212 [Pipeline] sh 00:02:11.490 + tar --no-same-owner -xf spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:02:14.044 [Pipeline] sh 00:02:14.321 + git -C spdk log --oneline -n5 00:02:14.322 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:02:14.322 5592070b3 doc: update nvmf_tracing.md 00:02:14.322 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:02:14.322 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:02:14.322 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT 00:02:14.339 [Pipeline] writeFile 00:02:14.354 [Pipeline] sh 00:02:14.632 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:14.642 [Pipeline] sh 00:02:14.919 + cat autorun-spdk.conf 00:02:14.919 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.919 SPDK_TEST_NVME=1 00:02:14.919 SPDK_TEST_FTL=1 00:02:14.919 SPDK_TEST_ISAL=1 00:02:14.919 SPDK_RUN_ASAN=1 00:02:14.919 SPDK_RUN_UBSAN=1 00:02:14.919 SPDK_TEST_XNVME=1 00:02:14.919 SPDK_TEST_NVME_FDP=1 00:02:14.919 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:14.926 RUN_NIGHTLY=1 00:02:14.928 [Pipeline] } 00:02:14.939 [Pipeline] // stage 00:02:14.954 [Pipeline] stage 00:02:14.955 [Pipeline] { (Run VM) 00:02:14.967 [Pipeline] sh 00:02:15.242 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:15.242 + echo 'Start stage prepare_nvme.sh' 00:02:15.242 Start stage prepare_nvme.sh 00:02:15.242 + [[ -n 5 ]] 00:02:15.242 + disk_prefix=ex5 00:02:15.242 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:02:15.242 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:02:15.242 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:02:15.242 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.242 ++ SPDK_TEST_NVME=1 00:02:15.242 ++ SPDK_TEST_FTL=1 00:02:15.242 ++ SPDK_TEST_ISAL=1 00:02:15.242 ++ 
SPDK_RUN_ASAN=1 00:02:15.242 ++ SPDK_RUN_UBSAN=1 00:02:15.242 ++ SPDK_TEST_XNVME=1 00:02:15.242 ++ SPDK_TEST_NVME_FDP=1 00:02:15.242 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:15.242 ++ RUN_NIGHTLY=1 00:02:15.242 + cd /var/jenkins/workspace/nvme-vg-autotest 00:02:15.242 + nvme_files=() 00:02:15.242 + declare -A nvme_files 00:02:15.242 + backend_dir=/var/lib/libvirt/images/backends 00:02:15.242 + nvme_files['nvme.img']=5G 00:02:15.242 + nvme_files['nvme-cmb.img']=5G 00:02:15.242 + nvme_files['nvme-multi0.img']=4G 00:02:15.242 + nvme_files['nvme-multi1.img']=4G 00:02:15.242 + nvme_files['nvme-multi2.img']=4G 00:02:15.242 + nvme_files['nvme-openstack.img']=8G 00:02:15.242 + nvme_files['nvme-zns.img']=5G 00:02:15.242 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:15.242 + (( SPDK_TEST_FTL == 1 )) 00:02:15.243 + nvme_files["nvme-ftl.img"]=6G 00:02:15.243 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:15.243 + nvme_files["nvme-fdp.img"]=1G 00:02:15.243 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:15.243 + for nvme in "${!nvme_files[@]}" 00:02:15.243 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:02:15.243 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:15.243 + for nvme in "${!nvme_files[@]}" 00:02:15.243 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G 00:02:15.816 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:15.816 + for nvme in "${!nvme_files[@]}" 00:02:15.816 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:02:15.816 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:15.816 + for nvme in "${!nvme_files[@]}" 00:02:15.816 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:02:16.076 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:16.076 + for nvme in "${!nvme_files[@]}" 00:02:16.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:02:16.648 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:16.648 + for nvme in "${!nvme_files[@]}" 00:02:16.648 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:02:16.907 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:16.907 + for nvme in "${!nvme_files[@]}" 00:02:16.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:02:16.907 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:16.907 + for nvme in "${!nvme_files[@]}" 00:02:16.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G 00:02:16.907 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:16.907 + for nvme in "${!nvme_files[@]}" 00:02:16.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:02:17.476 Formatting 
'/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:17.476 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:02:17.476 + echo 'End stage prepare_nvme.sh' 00:02:17.476 End stage prepare_nvme.sh 00:02:17.490 [Pipeline] sh 00:02:17.777 + DISTRO=fedora39 00:02:17.778 + CPUS=10 00:02:17.778 + RAM=12288 00:02:17.778 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:17.778 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:17.778 00:02:17.778 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:17.778 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:17.778 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:17.778 HELP=0 00:02:17.778 DRY_RUN=0 00:02:17.778 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img, 00:02:17.778 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:17.778 NVME_AUTO_CREATE=0 00:02:17.778 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,, 00:02:17.778 NVME_CMB=,,,, 00:02:17.778 NVME_PMR=,,,, 00:02:17.778 NVME_ZNS=,,,, 00:02:17.778 NVME_MS=true,,,, 00:02:17.778 NVME_FDP=,,,on, 00:02:17.778 SPDK_VAGRANT_DISTRO=fedora39 00:02:17.778 SPDK_VAGRANT_VMCPU=10 00:02:17.778 SPDK_VAGRANT_VMRAM=12288 00:02:17.778 SPDK_VAGRANT_PROVIDER=libvirt 00:02:17.778 SPDK_VAGRANT_HTTP_PROXY= 00:02:17.778 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:17.778 SPDK_OPENSTACK_NETWORK=0 00:02:17.778 VAGRANT_PACKAGE_BOX=0 00:02:17.778 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:17.778 FORCE_DISTRO=true 00:02:17.778 VAGRANT_BOX_VERSION= 00:02:17.778 EXTRA_VAGRANTFILES= 00:02:17.778 NIC_MODEL=e1000 00:02:17.778 00:02:17.778 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:17.778 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:19.694 Bringing machine 'default' up with 'libvirt' provider... 00:02:20.266 ==> default: Creating image (snapshot of base box volume). 00:02:20.528 ==> default: Creating domain with the following settings... 
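The Formatting lines printed during prepare_nvme.sh above are qemu-img create output; before the domain settings are listed below, here is a minimal equivalent for recreating the same raw backing files (an approximation: create_nvme_img.sh wraps additional logic):

# Sketch: raw NVMe backing files as created above; sizes mirror the
# nvme_files map from prepare_nvme.sh, preallocation matches the log.
set -euo pipefail
backend_dir=/var/lib/libvirt/images/backends
declare -A nvme_files=(
  [nvme.img]=5G [nvme-cmb.img]=5G [nvme-zns.img]=5G
  [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
  [nvme-openstack.img]=8G [nvme-ftl.img]=6G [nvme-fdp.img]=1G
)
for img in "${!nvme_files[@]}"; do
  qemu-img create -f raw -o preallocation=falloc \
      "$backend_dir/ex5-$img" "${nvme_files[$img]}"
done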
00:02:20.528 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732681156_006bb8ed49e11892c4fd
00:02:20.528 ==> default: -- Domain type: kvm
00:02:20.528 ==> default: -- Cpus: 10
00:02:20.528 ==> default: -- Feature: acpi
00:02:20.528 ==> default: -- Feature: apic
00:02:20.528 ==> default: -- Feature: pae
00:02:20.528 ==> default: -- Memory: 12288M
00:02:20.528 ==> default: -- Memory Backing: hugepages:
00:02:20.528 ==> default: -- Management MAC:
00:02:20.528 ==> default: -- Loader:
00:02:20.528 ==> default: -- Nvram:
00:02:20.528 ==> default: -- Base box: spdk/fedora39
00:02:20.528 ==> default: -- Storage pool: default
00:02:20.528 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732681156_006bb8ed49e11892c4fd.img (20G)
00:02:20.528 ==> default: -- Volume Cache: default
00:02:20.529 ==> default: -- Kernel:
00:02:20.529 ==> default: -- Initrd:
00:02:20.529 ==> default: -- Graphics Type: vnc
00:02:20.529 ==> default: -- Graphics Port: -1
00:02:20.529 ==> default: -- Graphics IP: 127.0.0.1
00:02:20.529 ==> default: -- Graphics Password: Not defined
00:02:20.529 ==> default: -- Video Type: cirrus
00:02:20.529 ==> default: -- Video VRAM: 9216
00:02:20.529 ==> default: -- Sound Type:
00:02:20.529 ==> default: -- Keymap: en-us
00:02:20.529 ==> default: -- TPM Path:
00:02:20.529 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:20.529 ==> default: -- Command line args:
00:02:20.529 ==> default: -> value=-device,
00:02:20.529 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:20.529 ==> default: -> value=-drive,
00:02:20.529 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:20.529 ==> default: -> value=-device,
00:02:20.529 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:20.529 ==> default: -> value=-device,
00:02:20.529 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:20.529 ==> default: -> value=-drive,
00:02:20.529 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0,
00:02:20.529 ==> default: -> value=-device,
00:02:20.529 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:20.529 ==> default: -> value=-device,
00:02:20.529 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:20.529 ==> default: -> value=-drive,
00:02:20.529 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:20.529 ==> default: -> value=-device,
00:02:20.529 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:20.529 ==> default: -> value=-drive,
00:02:20.529 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:20.529 ==> default: -> value=-device,
00:02:20.529 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:20.529 ==> default: -> value=-drive,
00:02:20.529 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:20.529 ==> default: -> value=-device,
00:02:20.529 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:20.529 ==> default: -> value=-device,
00:02:20.529 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:20.529 ==> default: -> value=-device,
00:02:20.529 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:20.529 ==> default: -> value=-drive,
00:02:20.529 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:20.529 ==> default: -> value=-device,
00:02:20.529 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:20.790 ==> default: Creating shared folders metadata...
00:02:20.790 ==> default: Starting domain.
00:02:22.707 ==> default: Waiting for domain to get an IP address...
00:02:40.862 ==> default: Waiting for SSH to become available...
00:02:41.804 ==> default: Configuring and enabling network interfaces...
00:02:47.096 default: SSH address: 192.168.121.171:22
00:02:47.096 default: SSH username: vagrant
00:02:47.096 default: SSH auth method: private key
00:02:48.484 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:56.696 ==> default: Mounting SSHFS shared folder...
00:02:58.611 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:58.611 ==> default: Checking Mount..
00:02:59.555 ==> default: Folder Successfully Mounted!
00:02:59.555
00:02:59.555 SUCCESS!
00:02:59.555
00:02:59.555 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:59.555 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:59.555 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:59.555
00:02:59.566 [Pipeline] }
00:02:59.584 [Pipeline] // stage
00:02:59.594 [Pipeline] dir
00:02:59.595 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:59.596 [Pipeline] {
00:02:59.612 [Pipeline] catchError
00:02:59.614 [Pipeline] {
00:02:59.629 [Pipeline] sh
00:02:59.951 + vagrant ssh-config --host vagrant
00:02:59.951 + sed -ne '/^Host/,$p'
00:02:59.951 + tee ssh_conf
00:03:02.507 Host vagrant
00:03:02.507 HostName 192.168.121.171
00:03:02.507 User vagrant
00:03:02.507 Port 22
00:03:02.507 UserKnownHostsFile /dev/null
00:03:02.507 StrictHostKeyChecking no
00:03:02.507 PasswordAuthentication no
00:03:02.507 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:02.507 IdentitiesOnly yes
00:03:02.507 LogLevel FATAL
00:03:02.507 ForwardAgent yes
00:03:02.507 ForwardX11 yes
00:03:02.507
00:03:02.521 [Pipeline] withEnv
00:03:02.524 [Pipeline] {
00:03:02.536 [Pipeline] sh
00:03:02.819 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:03:02.819 source /etc/os-release
00:03:02.819 [[ -e /image.version ]] && img=$(< /image.version)
00:03:02.819 # Minimal, systemd-like check.
00:03:02.819 if [[ -e /.dockerenv ]]; then 00:03:02.819 # Clear garbage from the node'\''s name: 00:03:02.819 # agt-er_autotest_547-896 -> autotest_547-896 00:03:02.819 # $HOSTNAME is the actual container id 00:03:02.819 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:02.819 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:02.819 # We can assume this is a mount from a host where container is running, 00:03:02.819 # so fetch its hostname to easily identify the target swarm worker. 00:03:02.819 container="$(< /etc/hostname) ($agent)" 00:03:02.819 else 00:03:02.819 # Fallback 00:03:02.819 container=$agent 00:03:02.819 fi 00:03:02.819 fi 00:03:02.819 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:02.819 ' 00:03:03.094 [Pipeline] } 00:03:03.111 [Pipeline] // withEnv 00:03:03.119 [Pipeline] setCustomBuildProperty 00:03:03.134 [Pipeline] stage 00:03:03.137 [Pipeline] { (Tests) 00:03:03.154 [Pipeline] sh 00:03:03.439 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:03.717 [Pipeline] sh 00:03:04.002 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:04.282 [Pipeline] timeout 00:03:04.282 Timeout set to expire in 50 min 00:03:04.284 [Pipeline] { 00:03:04.299 [Pipeline] sh 00:03:04.584 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:03:05.155 HEAD is now at 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:03:05.168 [Pipeline] sh 00:03:05.452 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:03:05.730 [Pipeline] sh 00:03:06.016 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:06.294 [Pipeline] sh 00:03:06.593 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:03:06.593 ++ readlink -f spdk_repo 00:03:06.853 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:06.853 + [[ -n /home/vagrant/spdk_repo ]] 00:03:06.853 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:06.853 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:06.853 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:06.853 + [[ ! 
-d /home/vagrant/spdk_repo/output ]]
00:03:06.853 + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:06.853 + [[ nvme-vg-autotest == pkgdep-* ]]
00:03:06.853 + cd /home/vagrant/spdk_repo
00:03:06.853 + source /etc/os-release
00:03:06.853 ++ NAME='Fedora Linux'
00:03:06.853 ++ VERSION='39 (Cloud Edition)'
00:03:06.853 ++ ID=fedora
00:03:06.853 ++ VERSION_ID=39
00:03:06.853 ++ VERSION_CODENAME=
00:03:06.853 ++ PLATFORM_ID=platform:f39
00:03:06.853 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:06.853 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:06.853 ++ LOGO=fedora-logo-icon
00:03:06.853 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:06.853 ++ HOME_URL=https://fedoraproject.org/
00:03:06.853 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:06.853 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:06.853 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:06.853 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:06.853 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:06.853 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:06.853 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:06.853 ++ SUPPORT_END=2024-11-12
00:03:06.853 ++ VARIANT='Cloud Edition'
00:03:06.853 ++ VARIANT_ID=cloud
00:03:06.853 + uname -a
00:03:06.853 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:06.853 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:07.115 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:07.376 Hugepages
00:03:07.376 node hugesize free / total
00:03:07.376 node0 1048576kB 0 / 0
00:03:07.376 node0 2048kB 0 / 0
00:03:07.376
00:03:07.376 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:07.376 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:03:07.376 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:03:07.376 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:03:07.638 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:03:07.638 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:03:07.638 + rm -f /tmp/spdk-ld-path
00:03:07.638 + source autorun-spdk.conf
00:03:07.638 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:07.638 ++ SPDK_TEST_NVME=1
00:03:07.638 ++ SPDK_TEST_FTL=1
00:03:07.638 ++ SPDK_TEST_ISAL=1
00:03:07.638 ++ SPDK_RUN_ASAN=1
00:03:07.638 ++ SPDK_RUN_UBSAN=1
00:03:07.638 ++ SPDK_TEST_XNVME=1
00:03:07.638 ++ SPDK_TEST_NVME_FDP=1
00:03:07.638 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:07.638 ++ RUN_NIGHTLY=1
00:03:07.638 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:07.638 + [[ -n '' ]]
00:03:07.638 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:07.638 + for M in /var/spdk/build-*-manifest.txt
00:03:07.638 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:07.638 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:07.638 + for M in /var/spdk/build-*-manifest.txt
00:03:07.638 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:07.638 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:07.638 + for M in /var/spdk/build-*-manifest.txt
00:03:07.638 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:07.638 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:07.638 ++ uname
00:03:07.638 + [[ Linux == \L\i\n\u\x ]]
00:03:07.638 + sudo dmesg -T
00:03:07.638 + sudo dmesg --clear
00:03:07.638 + dmesg_pid=5037
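The nvme3 controller in the table above (0000:00:13.0) is the FDP-enabled device defined in the domain's command-line args earlier; detached from libvirt, a roughly equivalent stand-alone invocation looks like this (illustrative sketch only; device arguments copied from the log):

# Sketch: just the FDP subsystem and its controller, outside the full domain.
qemu-system-x86_64 -m 1G -nographic \
    -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
    -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0 \
    -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096

fdp=on with fdp.nruh=8 exposes Flexible Data Placement reclaim unit handles on the subsystem, which is what the SPDK_TEST_NVME_FDP=1 tests target.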
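The hugepage counters that setup.sh status printed above come straight from sysfs; the same free/total numbers can be read without SPDK (a small stand-alone sketch):

# Sketch: per-node hugepage stats, matching the "node hugesize free / total" table above.
for node in /sys/devices/system/node/node[0-9]*; do
  for hp in "$node"/hugepages/hugepages-*; do
    printf '%s %s %s / %s\n' "${node##*/}" "${hp##*hugepages-}" \
        "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
  done
done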
+ [[ Fedora Linux == FreeBSD ]] 00:03:07.638 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:07.638 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:07.638 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:07.638 + [[ -x /usr/src/fio-static/fio ]] 00:03:07.638 + sudo dmesg -Tw 00:03:07.638 + export FIO_BIN=/usr/src/fio-static/fio 00:03:07.638 + FIO_BIN=/usr/src/fio-static/fio 00:03:07.639 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:07.639 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:07.639 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:07.639 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:07.639 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:07.639 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:07.639 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:07.639 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:07.639 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:07.639 04:20:04 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:07.639 04:20:04 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:07.639 04:20:04 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:07.639 04:20:04 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:03:07.639 04:20:04 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:03:07.639 04:20:04 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:03:07.639 04:20:04 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:03:07.639 04:20:04 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:07.639 04:20:04 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:03:07.639 04:20:04 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:03:07.639 04:20:04 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:07.639 04:20:04 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:03:07.639 04:20:04 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:07.639 04:20:04 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:07.901 04:20:04 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:07.901 04:20:04 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:07.901 04:20:04 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:07.901 04:20:04 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:07.901 04:20:04 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:07.901 04:20:04 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:07.901 04:20:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.901 04:20:04 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.901 04:20:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.901 04:20:04 -- paths/export.sh@5 -- $ export PATH 00:03:07.901 04:20:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.901 04:20:04 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:07.901 04:20:04 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:07.901 04:20:04 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732681204.XXXXXX 00:03:07.901 04:20:04 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732681204.VNCXKr 00:03:07.901 04:20:04 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:07.901 04:20:04 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:07.901 04:20:04 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:07.901 04:20:04 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:07.901 04:20:04 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:07.901 04:20:04 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:07.901 04:20:04 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:07.901 04:20:04 -- common/autotest_common.sh@10 -- $ set +x 00:03:07.901 04:20:04 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:03:07.901 04:20:04 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:07.901 04:20:04 -- pm/common@17 -- $ local monitor 00:03:07.901 04:20:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.901 04:20:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.901 04:20:04 -- pm/common@25 -- $ sleep 1 00:03:07.901 04:20:04 -- pm/common@21 -- $ date +%s 00:03:07.901 04:20:04 -- pm/common@21 -- $ date +%s 00:03:07.901 04:20:04 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732681204 00:03:07.901 04:20:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732681204 00:03:07.901 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732681204_collect-cpu-load.pm.log 00:03:07.901 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732681204_collect-vmstat.pm.log 00:03:08.846 04:20:05 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:08.846 04:20:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:08.846 04:20:05 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:08.846 04:20:05 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:08.846 04:20:05 -- spdk/autobuild.sh@16 -- $ date -u 00:03:08.846 Wed Nov 27 04:20:05 AM UTC 2024 00:03:08.846 04:20:05 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:08.846 v25.01-pre-271-g2f2acf4eb 00:03:08.846 04:20:05 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:08.846 04:20:05 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:08.846 04:20:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:08.846 04:20:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:08.846 04:20:05 -- common/autotest_common.sh@10 -- $ set +x 00:03:08.846 ************************************ 00:03:08.846 START TEST asan 00:03:08.846 ************************************ 00:03:08.846 using asan 00:03:08.846 04:20:05 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:03:08.846 00:03:08.846 real 0m0.000s 00:03:08.846 user 0m0.000s 00:03:08.846 sys 0m0.000s 00:03:08.846 04:20:05 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:08.846 ************************************ 00:03:08.846 END TEST asan 00:03:08.846 ************************************ 00:03:08.846 04:20:05 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:08.846 04:20:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:08.846 04:20:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:08.846 04:20:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:08.846 04:20:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:08.846 04:20:05 -- common/autotest_common.sh@10 -- $ set +x 00:03:08.846 ************************************ 00:03:08.846 START TEST ubsan 00:03:08.846 ************************************ 00:03:08.846 using ubsan 00:03:08.846 04:20:05 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:08.846 00:03:08.846 real 0m0.000s 00:03:08.846 user 0m0.000s 00:03:08.846 sys 0m0.000s 00:03:08.846 04:20:05 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:08.846 ************************************ 00:03:08.846 END TEST ubsan 00:03:08.846 ************************************ 00:03:08.846 04:20:05 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:09.107 04:20:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:09.107 04:20:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:09.107 04:20:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:09.107 04:20:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:09.107 04:20:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:09.107 04:20:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:09.107 04:20:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
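The START TEST / END TEST banners and the real/user/sys timings above come from SPDK's run_test helper; its shape is roughly the following (simplified sketch, not the actual implementation in autotest_common.sh):

# Sketch: banner-and-time wrapper in the style of run_test above.
run_test() {
  local name=$1; shift
  echo '************************************'
  echo "START TEST $name"
  echo '************************************'
  time "$@"                       # prints the real/user/sys lines seen above
  echo '************************************'
  echo "END TEST $name"
  echo '************************************'
}
run_test asan echo 'using asan'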
00:03:09.107 04:20:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:09.107 04:20:05 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:03:09.107 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:09.107 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:09.367 Using 'verbs' RDMA provider
00:03:22.550 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:32.574 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:32.574 Creating mk/config.mk...done.
00:03:32.574 Creating mk/cc.flags.mk...done.
00:03:32.574 Type 'make' to build.
00:03:32.574 04:20:28 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:32.574 04:20:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:32.574 04:20:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:32.574 04:20:28 -- common/autotest_common.sh@10 -- $ set +x
00:03:32.574 ************************************
00:03:32.574 START TEST make
00:03:32.574 ************************************
00:03:32.574 04:20:28 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:32.574 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:32.574 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:32.574 meson setup builddir \
00:03:32.574 -Dwith-libaio=enabled \
00:03:32.574 -Dwith-liburing=enabled \
00:03:32.574 -Dwith-libvfn=disabled \
00:03:32.574 -Dwith-spdk=disabled \
00:03:32.574 -Dexamples=false \
00:03:32.574 -Dtests=false \
00:03:32.574 -Dtools=false && \
00:03:32.574 meson compile -C builddir && \
00:03:32.574 cd -)
00:03:32.574 make[1]: Nothing to be done for 'all'.
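The xnvme subbuild above resolves its optional backends at configure time; the host-side checks behind the "found: YES/NO" lines in the Meson summary below can be approximated like this (probe names taken from that summary):

# Sketch: pre-check the dependencies the xnvme meson setup looks for.
pkg-config --modversion liburing                        # summary below: liburing found: YES 2.2
test -e /usr/include/libaio.h && echo 'libaio.h found'  # summary below: Has header "libaio.h" : YES
pkg-config --exists libisal || echo 'libisal not found' # summary below: NO (tried pkgconfig and cmake)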
00:03:33.948 The Meson build system 00:03:33.948 Version: 1.5.0 00:03:33.948 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:33.948 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:33.948 Build type: native build 00:03:33.948 Project name: xnvme 00:03:33.948 Project version: 0.7.5 00:03:33.948 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:33.948 C linker for the host machine: cc ld.bfd 2.40-14 00:03:33.948 Host machine cpu family: x86_64 00:03:33.948 Host machine cpu: x86_64 00:03:33.948 Message: host_machine.system: linux 00:03:33.948 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:33.948 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:33.948 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:33.948 Run-time dependency threads found: YES 00:03:33.948 Has header "setupapi.h" : NO 00:03:33.948 Has header "linux/blkzoned.h" : YES 00:03:33.948 Has header "linux/blkzoned.h" : YES (cached) 00:03:33.948 Has header "libaio.h" : YES 00:03:33.948 Library aio found: YES 00:03:33.948 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:33.948 Run-time dependency liburing found: YES 2.2 00:03:33.948 Dependency libvfn skipped: feature with-libvfn disabled 00:03:33.948 Found CMake: /usr/bin/cmake (3.27.7) 00:03:33.948 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:33.948 Subproject spdk : skipped: feature with-spdk disabled 00:03:33.948 Run-time dependency appleframeworks found: NO (tried framework) 00:03:33.948 Run-time dependency appleframeworks found: NO (tried framework) 00:03:33.949 Library rt found: YES 00:03:33.949 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:33.949 Configuring xnvme_config.h using configuration 00:03:33.949 Configuring xnvme.spec using configuration 00:03:33.949 Run-time dependency bash-completion found: YES 2.11 00:03:33.949 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:33.949 Program cp found: YES (/usr/bin/cp) 00:03:33.949 Build targets in project: 3 00:03:33.949 00:03:33.949 xnvme 0.7.5 00:03:33.949 00:03:33.949 Subprojects 00:03:33.949 spdk : NO Feature 'with-spdk' disabled 00:03:33.949 00:03:33.949 User defined options 00:03:33.949 examples : false 00:03:33.949 tests : false 00:03:33.949 tools : false 00:03:33.949 with-libaio : enabled 00:03:33.949 with-liburing: enabled 00:03:33.949 with-libvfn : disabled 00:03:33.949 with-spdk : disabled 00:03:33.949 00:03:33.949 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:34.515 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:34.515 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:34.515 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:34.515 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:34.515 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:34.515 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:34.515 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:34.515 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:34.515 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:34.515 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:34.515 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 
00:03:34.515 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:34.515 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:34.515 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:34.515 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:34.515 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:34.515 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:34.515 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:34.515 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:34.515 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:34.515 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:34.515 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:34.774 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:34.774 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:34.774 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:34.774 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:34.774 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:34.774 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:34.774 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:34.774 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:34.774 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:34.774 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:34.774 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:34.774 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:34.774 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:34.774 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:34.774 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:34.774 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:34.774 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:34.774 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:34.774 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:34.774 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:34.774 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:34.774 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:34.774 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:34.774 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:34.774 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:34.774 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:34.774 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:34.774 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:34.774 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:34.774 [51/76] 
Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:34.774 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:34.774 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:35.033 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:35.033 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:35.033 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:35.033 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:35.033 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:35.033 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:35.033 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:35.033 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:35.033 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:35.033 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:35.033 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:35.033 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:35.033 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:35.033 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:35.033 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:35.033 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:35.033 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:35.291 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:35.291 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:35.291 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:35.550 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:35.550 [75/76] Linking static target lib/libxnvme.a 00:03:35.550 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:35.550 INFO: autodetecting backend as ninja 00:03:35.550 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:35.550 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:42.108 The Meson build system 00:03:42.108 Version: 1.5.0 00:03:42.108 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:42.108 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:42.108 Build type: native build 00:03:42.108 Program cat found: YES (/usr/bin/cat) 00:03:42.108 Project name: DPDK 00:03:42.108 Project version: 24.03.0 00:03:42.108 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:42.108 C linker for the host machine: cc ld.bfd 2.40-14 00:03:42.108 Host machine cpu family: x86_64 00:03:42.108 Host machine cpu: x86_64 00:03:42.108 Message: ## Building in Developer Mode ## 00:03:42.108 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:42.108 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:42.108 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:42.108 Program python3 found: YES (/usr/bin/python3) 00:03:42.108 Program cat found: YES (/usr/bin/cat) 00:03:42.108 Compiler for C supports arguments -march=native: YES 00:03:42.108 Checking for size of "void *" : 8 00:03:42.108 Checking for size of "void *" : 8 (cached) 00:03:42.108 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:03:42.108 Library m found: YES 00:03:42.108 Library numa found: YES 00:03:42.108 Has header "numaif.h" : YES 00:03:42.108 Library fdt found: NO 00:03:42.108 Library execinfo found: NO 00:03:42.108 Has header "execinfo.h" : YES 00:03:42.108 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:42.108 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:42.108 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:42.108 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:42.108 Run-time dependency openssl found: YES 3.1.1 00:03:42.108 Run-time dependency libpcap found: YES 1.10.4 00:03:42.108 Has header "pcap.h" with dependency libpcap: YES 00:03:42.108 Compiler for C supports arguments -Wcast-qual: YES 00:03:42.108 Compiler for C supports arguments -Wdeprecated: YES 00:03:42.108 Compiler for C supports arguments -Wformat: YES 00:03:42.108 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:42.108 Compiler for C supports arguments -Wformat-security: NO 00:03:42.108 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:42.108 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:42.108 Compiler for C supports arguments -Wnested-externs: YES 00:03:42.108 Compiler for C supports arguments -Wold-style-definition: YES 00:03:42.108 Compiler for C supports arguments -Wpointer-arith: YES 00:03:42.108 Compiler for C supports arguments -Wsign-compare: YES 00:03:42.108 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:42.108 Compiler for C supports arguments -Wundef: YES 00:03:42.108 Compiler for C supports arguments -Wwrite-strings: YES 00:03:42.108 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:42.108 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:42.108 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:42.108 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:42.108 Program objdump found: YES (/usr/bin/objdump) 00:03:42.108 Compiler for C supports arguments -mavx512f: YES 00:03:42.108 Checking if "AVX512 checking" compiles: YES 00:03:42.108 Fetching value of define "__SSE4_2__" : 1 00:03:42.108 Fetching value of define "__AES__" : 1 00:03:42.108 Fetching value of define "__AVX__" : 1 00:03:42.108 Fetching value of define "__AVX2__" : 1 00:03:42.108 Fetching value of define "__AVX512BW__" : 1 00:03:42.108 Fetching value of define "__AVX512CD__" : 1 00:03:42.108 Fetching value of define "__AVX512DQ__" : 1 00:03:42.108 Fetching value of define "__AVX512F__" : 1 00:03:42.108 Fetching value of define "__AVX512VL__" : 1 00:03:42.108 Fetching value of define "__PCLMUL__" : 1 00:03:42.108 Fetching value of define "__RDRND__" : 1 00:03:42.108 Fetching value of define "__RDSEED__" : 1 00:03:42.108 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:42.108 Fetching value of define "__znver1__" : (undefined) 00:03:42.108 Fetching value of define "__znver2__" : (undefined) 00:03:42.108 Fetching value of define "__znver3__" : (undefined) 00:03:42.108 Fetching value of define "__znver4__" : (undefined) 00:03:42.108 Library asan found: YES 00:03:42.108 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:42.108 Message: lib/log: Defining dependency "log" 00:03:42.108 Message: lib/kvargs: Defining dependency "kvargs" 00:03:42.108 Message: lib/telemetry: Defining dependency "telemetry" 00:03:42.108 Library rt found: YES 00:03:42.108 Checking for function "getentropy" : NO 00:03:42.108 Message: 
lib/eal: Defining dependency "eal" 00:03:42.108 Message: lib/ring: Defining dependency "ring" 00:03:42.108 Message: lib/rcu: Defining dependency "rcu" 00:03:42.108 Message: lib/mempool: Defining dependency "mempool" 00:03:42.108 Message: lib/mbuf: Defining dependency "mbuf" 00:03:42.108 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:42.108 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:42.108 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:42.108 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:42.108 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:42.108 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:42.108 Compiler for C supports arguments -mpclmul: YES 00:03:42.108 Compiler for C supports arguments -maes: YES 00:03:42.108 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:42.108 Compiler for C supports arguments -mavx512bw: YES 00:03:42.108 Compiler for C supports arguments -mavx512dq: YES 00:03:42.108 Compiler for C supports arguments -mavx512vl: YES 00:03:42.108 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:42.108 Compiler for C supports arguments -mavx2: YES 00:03:42.108 Compiler for C supports arguments -mavx: YES 00:03:42.108 Message: lib/net: Defining dependency "net" 00:03:42.108 Message: lib/meter: Defining dependency "meter" 00:03:42.108 Message: lib/ethdev: Defining dependency "ethdev" 00:03:42.108 Message: lib/pci: Defining dependency "pci" 00:03:42.108 Message: lib/cmdline: Defining dependency "cmdline" 00:03:42.108 Message: lib/hash: Defining dependency "hash" 00:03:42.108 Message: lib/timer: Defining dependency "timer" 00:03:42.108 Message: lib/compressdev: Defining dependency "compressdev" 00:03:42.108 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:42.108 Message: lib/dmadev: Defining dependency "dmadev" 00:03:42.108 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:42.108 Message: lib/power: Defining dependency "power" 00:03:42.108 Message: lib/reorder: Defining dependency "reorder" 00:03:42.108 Message: lib/security: Defining dependency "security" 00:03:42.108 Has header "linux/userfaultfd.h" : YES 00:03:42.108 Has header "linux/vduse.h" : YES 00:03:42.108 Message: lib/vhost: Defining dependency "vhost" 00:03:42.108 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:42.108 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:42.108 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:42.108 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:42.108 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:42.108 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:42.108 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:42.108 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:42.108 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:42.108 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:42.108 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:42.108 Configuring doxy-api-html.conf using configuration 00:03:42.108 Configuring doxy-api-man.conf using configuration 00:03:42.108 Program mandb found: YES (/usr/bin/mandb) 00:03:42.108 Program sphinx-build found: NO 00:03:42.108 Configuring rte_build_config.h using configuration 00:03:42.108 Message: 00:03:42.108 ================= 00:03:42.108 Applications Enabled 00:03:42.108 
=================
00:03:42.108
00:03:42.108 apps:
00:03:42.108
00:03:42.108
00:03:42.108 Message:
00:03:42.108 =================
00:03:42.108 Libraries Enabled
00:03:42.108 =================
00:03:42.108
00:03:42.108 libs:
00:03:42.108 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:42.108 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:42.108 cryptodev, dmadev, power, reorder, security, vhost,
00:03:42.108
00:03:42.108 Message:
00:03:42.109 ===============
00:03:42.109 Drivers Enabled
00:03:42.109 ===============
00:03:42.109
00:03:42.109 common:
00:03:42.109
00:03:42.109 bus:
00:03:42.109 pci, vdev,
00:03:42.109 mempool:
00:03:42.109 ring,
00:03:42.109 dma:
00:03:42.109
00:03:42.109 net:
00:03:42.109
00:03:42.109 crypto:
00:03:42.109
00:03:42.109 compress:
00:03:42.109
00:03:42.109 vdpa:
00:03:42.109
00:03:42.109
00:03:42.109 Message:
00:03:42.109 =================
00:03:42.109 Content Skipped
00:03:42.109 =================
00:03:42.109
00:03:42.109 apps:
00:03:42.109 dumpcap: explicitly disabled via build config
00:03:42.109 graph: explicitly disabled via build config
00:03:42.109 pdump: explicitly disabled via build config
00:03:42.109 proc-info: explicitly disabled via build config
00:03:42.109 test-acl: explicitly disabled via build config
00:03:42.109 test-bbdev: explicitly disabled via build config
00:03:42.109 test-cmdline: explicitly disabled via build config
00:03:42.109 test-compress-perf: explicitly disabled via build config
00:03:42.109 test-crypto-perf: explicitly disabled via build config
00:03:42.109 test-dma-perf: explicitly disabled via build config
00:03:42.109 test-eventdev: explicitly disabled via build config
00:03:42.109 test-fib: explicitly disabled via build config
00:03:42.109 test-flow-perf: explicitly disabled via build config
00:03:42.109 test-gpudev: explicitly disabled via build config
00:03:42.109 test-mldev: explicitly disabled via build config
00:03:42.109 test-pipeline: explicitly disabled via build config
00:03:42.109 test-pmd: explicitly disabled via build config
00:03:42.109 test-regex: explicitly disabled via build config
00:03:42.109 test-sad: explicitly disabled via build config
00:03:42.109 test-security-perf: explicitly disabled via build config
00:03:42.109
00:03:42.109 libs:
00:03:42.109 argparse: explicitly disabled via build config
00:03:42.109 metrics: explicitly disabled via build config
00:03:42.109 acl: explicitly disabled via build config
00:03:42.109 bbdev: explicitly disabled via build config
00:03:42.109 bitratestats: explicitly disabled via build config
00:03:42.109 bpf: explicitly disabled via build config
00:03:42.109 cfgfile: explicitly disabled via build config
00:03:42.109 distributor: explicitly disabled via build config
00:03:42.109 efd: explicitly disabled via build config
00:03:42.109 eventdev: explicitly disabled via build config
00:03:42.109 dispatcher: explicitly disabled via build config
00:03:42.109 gpudev: explicitly disabled via build config
00:03:42.109 gro: explicitly disabled via build config
00:03:42.109 gso: explicitly disabled via build config
00:03:42.109 ip_frag: explicitly disabled via build config
00:03:42.109 jobstats: explicitly disabled via build config
00:03:42.109 latencystats: explicitly disabled via build config
00:03:42.109 lpm: explicitly disabled via build config
00:03:42.109 member: explicitly disabled via build config
00:03:42.109 pcapng: explicitly disabled via build config
00:03:42.109 rawdev: explicitly disabled via build config
00:03:42.109 regexdev: explicitly disabled via build config
00:03:42.109 mldev: explicitly disabled via build config
00:03:42.109 rib: explicitly disabled via build config
00:03:42.109 sched: explicitly disabled via build config
00:03:42.109 stack: explicitly disabled via build config
00:03:42.109 ipsec: explicitly disabled via build config
00:03:42.109 pdcp: explicitly disabled via build config
00:03:42.109 fib: explicitly disabled via build config
00:03:42.109 port: explicitly disabled via build config
00:03:42.109 pdump: explicitly disabled via build config
00:03:42.109 table: explicitly disabled via build config
00:03:42.109 pipeline: explicitly disabled via build config
00:03:42.109 graph: explicitly disabled via build config
00:03:42.109 node: explicitly disabled via build config
00:03:42.109
00:03:42.109 drivers:
00:03:42.109 common/cpt: not in enabled drivers build config
00:03:42.109 common/dpaax: not in enabled drivers build config
00:03:42.109 common/iavf: not in enabled drivers build config
00:03:42.109 common/idpf: not in enabled drivers build config
00:03:42.109 common/ionic: not in enabled drivers build config
00:03:42.109 common/mvep: not in enabled drivers build config
00:03:42.109 common/octeontx: not in enabled drivers build config
00:03:42.109 bus/auxiliary: not in enabled drivers build config
00:03:42.109 bus/cdx: not in enabled drivers build config
00:03:42.109 bus/dpaa: not in enabled drivers build config
00:03:42.109 bus/fslmc: not in enabled drivers build config
00:03:42.109 bus/ifpga: not in enabled drivers build config
00:03:42.109 bus/platform: not in enabled drivers build config
00:03:42.109 bus/uacce: not in enabled drivers build config
00:03:42.109 bus/vmbus: not in enabled drivers build config
00:03:42.109 common/cnxk: not in enabled drivers build config
00:03:42.109 common/mlx5: not in enabled drivers build config
00:03:42.109 common/nfp: not in enabled drivers build config
00:03:42.109 common/nitrox: not in enabled drivers build config
00:03:42.109 common/qat: not in enabled drivers build config
00:03:42.109 common/sfc_efx: not in enabled drivers build config
00:03:42.109 mempool/bucket: not in enabled drivers build config
00:03:42.109 mempool/cnxk: not in enabled drivers build config
00:03:42.109 mempool/dpaa: not in enabled drivers build config
00:03:42.109 mempool/dpaa2: not in enabled drivers build config
00:03:42.109 mempool/octeontx: not in enabled drivers build config
00:03:42.109 mempool/stack: not in enabled drivers build config
00:03:42.109 dma/cnxk: not in enabled drivers build config
00:03:42.109 dma/dpaa: not in enabled drivers build config
00:03:42.109 dma/dpaa2: not in enabled drivers build config
00:03:42.109 dma/hisilicon: not in enabled drivers build config
00:03:42.109 dma/idxd: not in enabled drivers build config
00:03:42.109 dma/ioat: not in enabled drivers build config
00:03:42.109 dma/skeleton: not in enabled drivers build config
00:03:42.109 net/af_packet: not in enabled drivers build config
00:03:42.109 net/af_xdp: not in enabled drivers build config
00:03:42.109 net/ark: not in enabled drivers build config
00:03:42.109 net/atlantic: not in enabled drivers build config
00:03:42.109 net/avp: not in enabled drivers build config
00:03:42.109 net/axgbe: not in enabled drivers build config
00:03:42.109 net/bnx2x: not in enabled drivers build config
00:03:42.109 net/bnxt: not in enabled drivers build config
00:03:42.109 net/bonding: not in enabled drivers build config
00:03:42.109 net/cnxk: not in enabled drivers build config
00:03:42.109 net/cpfl: not in enabled drivers build config
00:03:42.109 net/cxgbe: not in enabled drivers build config
00:03:42.109 net/dpaa: not in enabled drivers build config
00:03:42.109 net/dpaa2: not in enabled drivers build config
00:03:42.109 net/e1000: not in enabled drivers build config
00:03:42.109 net/ena: not in enabled drivers build config
00:03:42.109 net/enetc: not in enabled drivers build config
00:03:42.109 net/enetfec: not in enabled drivers build config
00:03:42.109 net/enic: not in enabled drivers build config
00:03:42.109 net/failsafe: not in enabled drivers build config
00:03:42.109 net/fm10k: not in enabled drivers build config
00:03:42.109 net/gve: not in enabled drivers build config
00:03:42.109 net/hinic: not in enabled drivers build config
00:03:42.109 net/hns3: not in enabled drivers build config
00:03:42.109 net/i40e: not in enabled drivers build config
00:03:42.109 net/iavf: not in enabled drivers build config
00:03:42.109 net/ice: not in enabled drivers build config
00:03:42.109 net/idpf: not in enabled drivers build config
00:03:42.109 net/igc: not in enabled drivers build config
00:03:42.109 net/ionic: not in enabled drivers build config
00:03:42.109 net/ipn3ke: not in enabled drivers build config
00:03:42.109 net/ixgbe: not in enabled drivers build config
00:03:42.109 net/mana: not in enabled drivers build config
00:03:42.109 net/memif: not in enabled drivers build config
00:03:42.109 net/mlx4: not in enabled drivers build config
00:03:42.109 net/mlx5: not in enabled drivers build config
00:03:42.109 net/mvneta: not in enabled drivers build config
00:03:42.109 net/mvpp2: not in enabled drivers build config
00:03:42.109 net/netvsc: not in enabled drivers build config
00:03:42.109 net/nfb: not in enabled drivers build config
00:03:42.109 net/nfp: not in enabled drivers build config
00:03:42.109 net/ngbe: not in enabled drivers build config
00:03:42.109 net/null: not in enabled drivers build config
00:03:42.109 net/octeontx: not in enabled drivers build config
00:03:42.109 net/octeon_ep: not in enabled drivers build config
00:03:42.109 net/pcap: not in enabled drivers build config
00:03:42.109 net/pfe: not in enabled drivers build config
00:03:42.109 net/qede: not in enabled drivers build config
00:03:42.109 net/ring: not in enabled drivers build config
00:03:42.109 net/sfc: not in enabled drivers build config
00:03:42.109 net/softnic: not in enabled drivers build config
00:03:42.109 net/tap: not in enabled drivers build config
00:03:42.109 net/thunderx: not in enabled drivers build config
00:03:42.109 net/txgbe: not in enabled drivers build config
00:03:42.109 net/vdev_netvsc: not in enabled drivers build config
00:03:42.109 net/vhost: not in enabled drivers build config
00:03:42.109 net/virtio: not in enabled drivers build config
00:03:42.109 net/vmxnet3: not in enabled drivers build config
00:03:42.109 raw/*: missing internal dependency, "rawdev"
00:03:42.109 crypto/armv8: not in enabled drivers build config
00:03:42.109 crypto/bcmfs: not in enabled drivers build config
00:03:42.109 crypto/caam_jr: not in enabled drivers build config
00:03:42.109 crypto/ccp: not in enabled drivers build config
00:03:42.109 crypto/cnxk: not in enabled drivers build config
00:03:42.109 crypto/dpaa_sec: not in enabled drivers build config
00:03:42.109 crypto/dpaa2_sec: not in enabled drivers build config
00:03:42.109 crypto/ipsec_mb: not in enabled drivers build config
00:03:42.109 crypto/mlx5: not in enabled drivers build config
00:03:42.109 crypto/mvsam: not in enabled drivers build config
00:03:42.109 crypto/nitrox: not in enabled drivers build config
00:03:42.109 crypto/null: not in enabled drivers build config
00:03:42.109 crypto/octeontx: not in enabled drivers build config
00:03:42.109 crypto/openssl: not in enabled drivers build config
00:03:42.109 crypto/scheduler: not in enabled drivers build config
00:03:42.109 crypto/uadk: not in enabled drivers build config
00:03:42.109 crypto/virtio: not in enabled drivers build config
00:03:42.109 compress/isal: not in enabled drivers build config
00:03:42.110 compress/mlx5: not in enabled drivers build config
00:03:42.110 compress/nitrox: not in enabled drivers build config
00:03:42.110 compress/octeontx: not in enabled drivers build config
00:03:42.110 compress/zlib: not in enabled drivers build config
00:03:42.110 regex/*: missing internal dependency, "regexdev"
00:03:42.110 ml/*: missing internal dependency, "mldev"
00:03:42.110 vdpa/ifc: not in enabled drivers build config
00:03:42.110 vdpa/mlx5: not in enabled drivers build config
00:03:42.110 vdpa/nfp: not in enabled drivers build config
00:03:42.110 vdpa/sfc: not in enabled drivers build config
00:03:42.110 event/*: missing internal dependency, "eventdev"
00:03:42.110 baseband/*: missing internal dependency, "bbdev"
00:03:42.110 gpu/*: missing internal dependency, "gpudev"
00:03:42.110
00:03:42.110
00:03:42.110 Build targets in project: 84
00:03:42.110
00:03:42.110 DPDK 24.03.0
00:03:42.110
00:03:42.110 User defined options
00:03:42.110 buildtype : debug
00:03:42.110 default_library : shared
00:03:42.110 libdir : lib
00:03:42.110 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:42.110 b_sanitize : address
00:03:42.110 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:42.110 c_link_args :
00:03:42.110 cpu_instruction_set: native
00:03:42.110 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:42.110 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:42.110 enable_docs : false
00:03:42.110 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:42.110 enable_kmods : false
00:03:42.110 max_lcores : 128
00:03:42.110 tests : false
00:03:42.110
00:03:42.110 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:42.675 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:42.675 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:42.675 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:42.675 [3/267] Linking static target lib/librte_kvargs.a
00:03:42.675 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:42.675 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:42.675 [6/267] Linking static target lib/librte_log.a
00:03:42.932 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:42.933 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:42.933 [9/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:42.933 [10/267]
Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:42.933 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:42.933 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:42.933 [13/267] Linking static target lib/librte_telemetry.a 00:03:42.933 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:42.933 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:43.189 [16/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.189 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:43.189 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:43.447 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:43.447 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:43.447 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:43.447 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:43.447 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:43.447 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:43.447 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:43.447 [26/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.447 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:43.705 [28/267] Linking target lib/librte_log.so.24.1 00:03:43.705 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:43.705 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:43.705 [31/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.705 [32/267] Linking target lib/librte_kvargs.so.24.1 00:03:43.705 [33/267] Linking target lib/librte_telemetry.so.24.1 00:03:43.705 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:43.705 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:43.705 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:43.963 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:43.963 [38/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:43.963 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:43.963 [40/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:43.963 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:43.963 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:43.963 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:43.963 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:43.963 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:44.221 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:44.221 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:44.221 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 
00:03:44.221 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:44.221 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:44.221 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:44.479 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:44.479 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:44.479 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:44.479 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:44.479 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:44.479 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:44.479 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:44.479 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:44.782 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:44.782 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:44.782 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:44.782 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:44.782 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:44.782 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:44.782 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:45.061 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:45.061 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:45.061 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:45.061 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:45.061 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:45.061 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:45.061 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:45.319 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:45.319 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:45.319 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:45.319 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:45.319 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:45.319 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:45.577 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:45.577 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:45.577 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:45.577 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:45.577 [84/267] Linking static target lib/librte_ring.a 00:03:45.577 [85/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:45.836 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:45.836 [87/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:45.836 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:45.836 [89/267] Linking static target lib/librte_eal.a 00:03:45.836 [90/267] Linking static target 
lib/librte_rcu.a 00:03:45.836 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:45.836 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:45.836 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:45.836 [94/267] Linking static target lib/librte_mempool.a 00:03:46.094 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:46.094 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:46.094 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.094 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:46.094 [99/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.094 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:46.352 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:46.353 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:46.353 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:46.353 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:46.353 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:46.353 [106/267] Linking static target lib/librte_meter.a 00:03:46.611 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:46.611 [108/267] Linking static target lib/librte_net.a 00:03:46.611 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:46.611 [110/267] Linking static target lib/librte_mbuf.a 00:03:46.611 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:46.611 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:46.611 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:46.611 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.869 [115/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.869 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.869 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:46.869 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:47.127 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:47.127 [120/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.127 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:47.386 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:47.386 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:47.386 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:47.386 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:47.386 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:47.386 [127/267] Linking static target lib/librte_pci.a 00:03:47.386 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:47.644 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:47.644 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:47.644 [131/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:47.644 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:47.644 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:47.644 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:47.644 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:47.644 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:47.644 [137/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.644 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:47.644 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:47.644 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:47.644 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:47.902 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:47.902 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:47.902 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:47.902 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:47.902 [146/267] Linking static target lib/librte_cmdline.a 00:03:47.902 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:47.902 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:48.160 [149/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:48.160 [150/267] Linking static target lib/librte_timer.a 00:03:48.160 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:48.160 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:48.161 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:48.161 [154/267] Linking static target lib/librte_ethdev.a 00:03:48.161 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:48.419 [156/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:48.419 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:48.419 [158/267] Linking static target lib/librte_compressdev.a 00:03:48.419 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:48.419 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:48.419 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:48.419 [162/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.677 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:48.677 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:48.677 [165/267] Linking static target lib/librte_dmadev.a 00:03:48.677 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:48.677 [167/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:48.677 [168/267] Linking static target lib/librte_hash.a 00:03:48.935 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:48.935 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:48.935 [171/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:48.935 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.194 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:49.194 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.194 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:49.194 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:49.194 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:49.194 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:49.453 [179/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:49.453 [180/267] Linking static target lib/librte_cryptodev.a 00:03:49.453 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:49.453 [182/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.453 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:49.711 [184/267] Linking static target lib/librte_power.a 00:03:49.711 [185/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:49.711 [186/267] Linking static target lib/librte_reorder.a 00:03:49.711 [187/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.711 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:49.711 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:49.970 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:49.970 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:49.970 [192/267] Linking static target lib/librte_security.a 00:03:49.970 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.537 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:50.537 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.537 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:50.537 [197/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.537 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:50.537 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:50.796 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:50.796 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:50.796 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:50.796 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:51.054 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:51.054 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:51.054 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:51.054 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:51.054 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:51.054 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:51.054 [210/267] Generating lib/cryptodev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:51.313 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:51.313 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:51.313 [213/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:51.313 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:51.313 [215/267] Linking static target drivers/librte_bus_vdev.a 00:03:51.313 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:51.313 [217/267] Linking static target drivers/librte_bus_pci.a 00:03:51.313 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:51.571 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:51.571 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:51.571 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.571 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:51.571 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:51.571 [224/267] Linking static target drivers/librte_mempool_ring.a 00:03:51.571 [225/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:51.829 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.829 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:53.202 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.202 [229/267] Linking target lib/librte_eal.so.24.1 00:03:53.202 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:53.202 [231/267] Linking target lib/librte_meter.so.24.1 00:03:53.202 [232/267] Linking target lib/librte_timer.so.24.1 00:03:53.202 [233/267] Linking target lib/librte_pci.so.24.1 00:03:53.202 [234/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:53.202 [235/267] Linking target lib/librte_dmadev.so.24.1 00:03:53.202 [236/267] Linking target lib/librte_ring.so.24.1 00:03:53.202 [237/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:53.202 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:53.202 [239/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:53.460 [240/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:53.460 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:53.460 [242/267] Linking target lib/librte_rcu.so.24.1 00:03:53.460 [243/267] Linking target lib/librte_mempool.so.24.1 00:03:53.460 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:53.460 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:53.460 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:53.460 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:53.460 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:53.756 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:53.756 [250/267] Linking 
target lib/librte_reorder.so.24.1 00:03:53.756 [251/267] Linking target lib/librte_net.so.24.1 00:03:53.756 [252/267] Linking target lib/librte_compressdev.so.24.1 00:03:53.756 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:03:53.756 [254/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.756 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:53.756 [256/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:53.756 [257/267] Linking target lib/librte_hash.so.24.1 00:03:53.756 [258/267] Linking target lib/librte_security.so.24.1 00:03:53.756 [259/267] Linking target lib/librte_cmdline.so.24.1 00:03:53.756 [260/267] Linking target lib/librte_ethdev.so.24.1 00:03:54.020 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:54.020 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:54.020 [263/267] Linking target lib/librte_power.so.24.1 00:03:54.954 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:54.954 [265/267] Linking static target lib/librte_vhost.a 00:03:55.924 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.924 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:55.924 INFO: autodetecting backend as ninja 00:03:55.924 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:10.798 CC lib/ut_mock/mock.o 00:04:10.798 CC lib/log/log.o 00:04:10.798 CC lib/log/log_flags.o 00:04:10.798 CC lib/log/log_deprecated.o 00:04:10.798 CC lib/ut/ut.o 00:04:10.798 LIB libspdk_ut.a 00:04:10.798 LIB libspdk_log.a 00:04:10.798 LIB libspdk_ut_mock.a 00:04:10.798 SO libspdk_ut.so.2.0 00:04:10.798 SO libspdk_ut_mock.so.6.0 00:04:10.798 SO libspdk_log.so.7.1 00:04:10.798 SYMLINK libspdk_ut_mock.so 00:04:10.798 SYMLINK libspdk_ut.so 00:04:10.798 SYMLINK libspdk_log.so 00:04:10.798 CC lib/dma/dma.o 00:04:10.798 CC lib/util/bit_array.o 00:04:10.798 CC lib/util/base64.o 00:04:10.798 CC lib/util/crc32.o 00:04:10.798 CC lib/util/cpuset.o 00:04:10.798 CC lib/util/crc16.o 00:04:10.798 CC lib/ioat/ioat.o 00:04:10.798 CC lib/util/crc32c.o 00:04:10.798 CXX lib/trace_parser/trace.o 00:04:10.798 CC lib/vfio_user/host/vfio_user_pci.o 00:04:10.798 CC lib/util/crc32_ieee.o 00:04:10.798 CC lib/util/crc64.o 00:04:10.798 CC lib/vfio_user/host/vfio_user.o 00:04:10.798 CC lib/util/dif.o 00:04:10.798 CC lib/util/fd.o 00:04:10.798 CC lib/util/fd_group.o 00:04:10.798 CC lib/util/file.o 00:04:10.798 LIB libspdk_dma.a 00:04:10.798 CC lib/util/hexlify.o 00:04:10.798 SO libspdk_dma.so.5.0 00:04:10.798 LIB libspdk_ioat.a 00:04:10.798 SO libspdk_ioat.so.7.0 00:04:10.798 SYMLINK libspdk_dma.so 00:04:10.798 CC lib/util/iov.o 00:04:10.798 CC lib/util/math.o 00:04:10.798 CC lib/util/net.o 00:04:10.798 LIB libspdk_vfio_user.a 00:04:10.798 SYMLINK libspdk_ioat.so 00:04:10.798 CC lib/util/pipe.o 00:04:10.798 CC lib/util/strerror_tls.o 00:04:10.798 SO libspdk_vfio_user.so.5.0 00:04:10.798 CC lib/util/string.o 00:04:10.798 SYMLINK libspdk_vfio_user.so 00:04:10.798 CC lib/util/uuid.o 00:04:10.798 CC lib/util/xor.o 00:04:10.798 CC lib/util/zipf.o 00:04:10.798 CC lib/util/md5.o 00:04:10.798 LIB libspdk_util.a 00:04:10.798 SO libspdk_util.so.10.1 00:04:10.798 LIB libspdk_trace_parser.a 00:04:10.798 SO libspdk_trace_parser.so.6.0 00:04:10.798 SYMLINK libspdk_util.so 
00:04:10.798 SYMLINK libspdk_trace_parser.so 00:04:10.798 CC lib/json/json_parse.o 00:04:10.798 CC lib/json/json_util.o 00:04:10.798 CC lib/json/json_write.o 00:04:10.798 CC lib/conf/conf.o 00:04:10.798 CC lib/rdma_utils/rdma_utils.o 00:04:10.798 CC lib/vmd/vmd.o 00:04:10.798 CC lib/vmd/led.o 00:04:10.798 CC lib/idxd/idxd_user.o 00:04:10.798 CC lib/idxd/idxd.o 00:04:10.798 CC lib/env_dpdk/env.o 00:04:10.798 CC lib/env_dpdk/memory.o 00:04:11.059 LIB libspdk_conf.a 00:04:11.059 SO libspdk_conf.so.6.0 00:04:11.059 CC lib/env_dpdk/pci.o 00:04:11.059 CC lib/env_dpdk/init.o 00:04:11.059 CC lib/env_dpdk/threads.o 00:04:11.059 LIB libspdk_rdma_utils.a 00:04:11.059 LIB libspdk_json.a 00:04:11.059 SO libspdk_rdma_utils.so.1.0 00:04:11.060 SYMLINK libspdk_conf.so 00:04:11.060 CC lib/env_dpdk/pci_ioat.o 00:04:11.060 SO libspdk_json.so.6.0 00:04:11.060 SYMLINK libspdk_rdma_utils.so 00:04:11.060 CC lib/idxd/idxd_kernel.o 00:04:11.060 SYMLINK libspdk_json.so 00:04:11.060 CC lib/env_dpdk/pci_virtio.o 00:04:11.060 CC lib/env_dpdk/pci_vmd.o 00:04:11.319 CC lib/rdma_provider/common.o 00:04:11.319 CC lib/env_dpdk/pci_idxd.o 00:04:11.319 CC lib/env_dpdk/pci_event.o 00:04:11.319 LIB libspdk_vmd.a 00:04:11.319 SO libspdk_vmd.so.6.0 00:04:11.319 CC lib/env_dpdk/sigbus_handler.o 00:04:11.319 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:11.319 CC lib/env_dpdk/pci_dpdk.o 00:04:11.319 SYMLINK libspdk_vmd.so 00:04:11.319 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:11.319 CC lib/jsonrpc/jsonrpc_server.o 00:04:11.319 LIB libspdk_idxd.a 00:04:11.319 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:11.319 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:11.319 SO libspdk_idxd.so.12.1 00:04:11.319 CC lib/jsonrpc/jsonrpc_client.o 00:04:11.578 SYMLINK libspdk_idxd.so 00:04:11.579 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:11.579 LIB libspdk_rdma_provider.a 00:04:11.579 SO libspdk_rdma_provider.so.7.0 00:04:11.579 SYMLINK libspdk_rdma_provider.so 00:04:11.579 LIB libspdk_jsonrpc.a 00:04:11.579 SO libspdk_jsonrpc.so.6.0 00:04:11.837 SYMLINK libspdk_jsonrpc.so 00:04:12.096 CC lib/rpc/rpc.o 00:04:12.096 LIB libspdk_env_dpdk.a 00:04:12.096 LIB libspdk_rpc.a 00:04:12.096 SO libspdk_rpc.so.6.0 00:04:12.354 SO libspdk_env_dpdk.so.15.1 00:04:12.354 SYMLINK libspdk_rpc.so 00:04:12.354 SYMLINK libspdk_env_dpdk.so 00:04:12.354 CC lib/trace/trace_flags.o 00:04:12.354 CC lib/trace/trace_rpc.o 00:04:12.354 CC lib/trace/trace.o 00:04:12.354 CC lib/keyring/keyring.o 00:04:12.354 CC lib/keyring/keyring_rpc.o 00:04:12.354 CC lib/notify/notify_rpc.o 00:04:12.354 CC lib/notify/notify.o 00:04:12.613 LIB libspdk_notify.a 00:04:12.613 SO libspdk_notify.so.6.0 00:04:12.613 LIB libspdk_trace.a 00:04:12.613 SYMLINK libspdk_notify.so 00:04:12.613 LIB libspdk_keyring.a 00:04:12.613 SO libspdk_trace.so.11.0 00:04:12.613 SO libspdk_keyring.so.2.0 00:04:12.613 SYMLINK libspdk_trace.so 00:04:12.871 SYMLINK libspdk_keyring.so 00:04:12.871 CC lib/thread/iobuf.o 00:04:12.871 CC lib/thread/thread.o 00:04:12.871 CC lib/sock/sock.o 00:04:12.871 CC lib/sock/sock_rpc.o 00:04:13.443 LIB libspdk_sock.a 00:04:13.443 SO libspdk_sock.so.10.0 00:04:13.443 SYMLINK libspdk_sock.so 00:04:13.704 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:13.704 CC lib/nvme/nvme_ns_cmd.o 00:04:13.704 CC lib/nvme/nvme_ctrlr.o 00:04:13.704 CC lib/nvme/nvme_fabric.o 00:04:13.704 CC lib/nvme/nvme_pcie_common.o 00:04:13.704 CC lib/nvme/nvme_qpair.o 00:04:13.704 CC lib/nvme/nvme_ns.o 00:04:13.704 CC lib/nvme/nvme_pcie.o 00:04:13.704 CC lib/nvme/nvme.o 00:04:14.274 CC lib/nvme/nvme_quirks.o 00:04:14.274 CC 
lib/nvme/nvme_transport.o 00:04:14.274 CC lib/nvme/nvme_discovery.o 00:04:14.274 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:14.274 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:14.534 LIB libspdk_thread.a 00:04:14.534 CC lib/nvme/nvme_tcp.o 00:04:14.534 SO libspdk_thread.so.11.0 00:04:14.534 CC lib/nvme/nvme_opal.o 00:04:14.534 CC lib/nvme/nvme_io_msg.o 00:04:14.534 SYMLINK libspdk_thread.so 00:04:14.534 CC lib/nvme/nvme_poll_group.o 00:04:14.534 CC lib/nvme/nvme_zns.o 00:04:14.794 CC lib/nvme/nvme_stubs.o 00:04:14.794 CC lib/nvme/nvme_auth.o 00:04:14.794 CC lib/nvme/nvme_cuse.o 00:04:15.052 CC lib/nvme/nvme_rdma.o 00:04:15.052 CC lib/accel/accel.o 00:04:15.052 CC lib/blob/blobstore.o 00:04:15.312 CC lib/init/json_config.o 00:04:15.312 CC lib/virtio/virtio.o 00:04:15.312 CC lib/fsdev/fsdev.o 00:04:15.607 CC lib/init/subsystem.o 00:04:15.607 CC lib/virtio/virtio_vhost_user.o 00:04:15.607 CC lib/init/subsystem_rpc.o 00:04:15.607 CC lib/init/rpc.o 00:04:15.607 CC lib/accel/accel_rpc.o 00:04:15.607 CC lib/accel/accel_sw.o 00:04:15.607 CC lib/blob/request.o 00:04:15.889 CC lib/virtio/virtio_vfio_user.o 00:04:15.889 LIB libspdk_init.a 00:04:15.889 SO libspdk_init.so.6.0 00:04:15.889 CC lib/virtio/virtio_pci.o 00:04:15.889 SYMLINK libspdk_init.so 00:04:15.889 CC lib/fsdev/fsdev_io.o 00:04:15.889 CC lib/fsdev/fsdev_rpc.o 00:04:15.889 CC lib/blob/zeroes.o 00:04:15.889 CC lib/blob/blob_bs_dev.o 00:04:15.889 CC lib/event/app.o 00:04:15.889 CC lib/event/reactor.o 00:04:15.889 CC lib/event/log_rpc.o 00:04:16.148 CC lib/event/app_rpc.o 00:04:16.148 LIB libspdk_nvme.a 00:04:16.148 LIB libspdk_virtio.a 00:04:16.148 SO libspdk_virtio.so.7.0 00:04:16.148 LIB libspdk_accel.a 00:04:16.148 CC lib/event/scheduler_static.o 00:04:16.148 SO libspdk_nvme.so.15.0 00:04:16.148 SYMLINK libspdk_virtio.so 00:04:16.148 LIB libspdk_fsdev.a 00:04:16.148 SO libspdk_accel.so.16.0 00:04:16.148 SO libspdk_fsdev.so.2.0 00:04:16.406 SYMLINK libspdk_fsdev.so 00:04:16.406 SYMLINK libspdk_accel.so 00:04:16.406 SYMLINK libspdk_nvme.so 00:04:16.406 LIB libspdk_event.a 00:04:16.406 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:16.406 SO libspdk_event.so.14.0 00:04:16.406 CC lib/bdev/bdev.o 00:04:16.406 CC lib/bdev/bdev_rpc.o 00:04:16.406 CC lib/bdev/bdev_zone.o 00:04:16.406 CC lib/bdev/part.o 00:04:16.406 CC lib/bdev/scsi_nvme.o 00:04:16.406 SYMLINK libspdk_event.so 00:04:17.340 LIB libspdk_fuse_dispatcher.a 00:04:17.340 SO libspdk_fuse_dispatcher.so.1.0 00:04:17.340 SYMLINK libspdk_fuse_dispatcher.so 00:04:17.907 LIB libspdk_blob.a 00:04:18.165 SO libspdk_blob.so.12.0 00:04:18.165 SYMLINK libspdk_blob.so 00:04:18.423 CC lib/blobfs/tree.o 00:04:18.423 CC lib/blobfs/blobfs.o 00:04:18.423 CC lib/lvol/lvol.o 00:04:18.682 LIB libspdk_bdev.a 00:04:18.682 SO libspdk_bdev.so.17.0 00:04:18.941 SYMLINK libspdk_bdev.so 00:04:18.941 CC lib/nvmf/ctrlr.o 00:04:18.941 CC lib/nvmf/ctrlr_discovery.o 00:04:18.941 CC lib/nvmf/ctrlr_bdev.o 00:04:18.941 CC lib/nvmf/subsystem.o 00:04:18.941 CC lib/scsi/dev.o 00:04:18.941 CC lib/nbd/nbd.o 00:04:18.941 CC lib/ublk/ublk.o 00:04:18.941 CC lib/ftl/ftl_core.o 00:04:19.199 LIB libspdk_lvol.a 00:04:19.199 SO libspdk_lvol.so.11.0 00:04:19.199 SYMLINK libspdk_lvol.so 00:04:19.199 CC lib/ftl/ftl_init.o 00:04:19.199 LIB libspdk_blobfs.a 00:04:19.199 SO libspdk_blobfs.so.11.0 00:04:19.199 CC lib/scsi/lun.o 00:04:19.458 CC lib/scsi/port.o 00:04:19.458 SYMLINK libspdk_blobfs.so 00:04:19.458 CC lib/scsi/scsi.o 00:04:19.458 CC lib/scsi/scsi_bdev.o 00:04:19.458 CC lib/nbd/nbd_rpc.o 00:04:19.458 CC lib/ftl/ftl_layout.o 00:04:19.458 
CC lib/ftl/ftl_debug.o 00:04:19.458 CC lib/ftl/ftl_io.o 00:04:19.458 LIB libspdk_nbd.a 00:04:19.458 SO libspdk_nbd.so.7.0 00:04:19.458 CC lib/ublk/ublk_rpc.o 00:04:19.716 CC lib/scsi/scsi_pr.o 00:04:19.716 SYMLINK libspdk_nbd.so 00:04:19.716 CC lib/scsi/scsi_rpc.o 00:04:19.716 CC lib/scsi/task.o 00:04:19.716 CC lib/ftl/ftl_sb.o 00:04:19.716 CC lib/ftl/ftl_l2p.o 00:04:19.716 LIB libspdk_ublk.a 00:04:19.716 CC lib/nvmf/nvmf.o 00:04:19.716 CC lib/ftl/ftl_l2p_flat.o 00:04:19.716 SO libspdk_ublk.so.3.0 00:04:19.716 CC lib/ftl/ftl_nv_cache.o 00:04:19.716 SYMLINK libspdk_ublk.so 00:04:19.716 CC lib/ftl/ftl_band.o 00:04:19.716 CC lib/ftl/ftl_band_ops.o 00:04:19.716 CC lib/ftl/ftl_writer.o 00:04:19.975 CC lib/ftl/ftl_rq.o 00:04:19.975 LIB libspdk_scsi.a 00:04:19.975 CC lib/ftl/ftl_reloc.o 00:04:19.975 SO libspdk_scsi.so.9.0 00:04:19.975 SYMLINK libspdk_scsi.so 00:04:19.975 CC lib/ftl/ftl_l2p_cache.o 00:04:19.975 CC lib/nvmf/nvmf_rpc.o 00:04:19.975 CC lib/nvmf/transport.o 00:04:19.975 CC lib/ftl/ftl_p2l.o 00:04:20.234 CC lib/nvmf/tcp.o 00:04:20.234 CC lib/ftl/ftl_p2l_log.o 00:04:20.493 CC lib/nvmf/stubs.o 00:04:20.493 CC lib/iscsi/conn.o 00:04:20.493 CC lib/iscsi/init_grp.o 00:04:20.493 CC lib/iscsi/iscsi.o 00:04:20.493 CC lib/ftl/mngt/ftl_mngt.o 00:04:20.752 CC lib/nvmf/mdns_server.o 00:04:20.752 CC lib/iscsi/param.o 00:04:20.752 CC lib/vhost/vhost.o 00:04:20.752 CC lib/vhost/vhost_rpc.o 00:04:20.752 CC lib/iscsi/portal_grp.o 00:04:20.752 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:20.752 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:21.011 CC lib/nvmf/rdma.o 00:04:21.011 CC lib/iscsi/tgt_node.o 00:04:21.011 CC lib/iscsi/iscsi_subsystem.o 00:04:21.011 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:21.011 CC lib/iscsi/iscsi_rpc.o 00:04:21.011 CC lib/iscsi/task.o 00:04:21.011 CC lib/vhost/vhost_scsi.o 00:04:21.269 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:21.269 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:21.269 CC lib/nvmf/auth.o 00:04:21.269 CC lib/vhost/vhost_blk.o 00:04:21.269 CC lib/vhost/rte_vhost_user.o 00:04:21.269 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:21.527 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:21.527 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:21.528 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:21.528 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:21.528 LIB libspdk_iscsi.a 00:04:21.786 SO libspdk_iscsi.so.8.0 00:04:21.786 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:21.786 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:21.786 SYMLINK libspdk_iscsi.so 00:04:21.786 CC lib/ftl/utils/ftl_conf.o 00:04:21.786 CC lib/ftl/utils/ftl_md.o 00:04:21.786 CC lib/ftl/utils/ftl_mempool.o 00:04:21.786 CC lib/ftl/utils/ftl_bitmap.o 00:04:21.786 CC lib/ftl/utils/ftl_property.o 00:04:21.786 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:22.043 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:22.043 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:22.043 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:22.043 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:22.043 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:22.043 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:22.043 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:22.043 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:22.043 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:22.043 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:22.043 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:22.301 LIB libspdk_vhost.a 00:04:22.301 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:22.301 CC lib/ftl/base/ftl_base_dev.o 00:04:22.301 CC lib/ftl/base/ftl_base_bdev.o 00:04:22.301 SO libspdk_vhost.so.8.0 00:04:22.301 CC lib/ftl/ftl_trace.o 00:04:22.301 SYMLINK libspdk_vhost.so 00:04:22.582 LIB 
libspdk_ftl.a 00:04:22.582 SO libspdk_ftl.so.9.0 00:04:22.840 LIB libspdk_nvmf.a 00:04:22.840 SYMLINK libspdk_ftl.so 00:04:22.840 SO libspdk_nvmf.so.20.0 00:04:23.099 SYMLINK libspdk_nvmf.so 00:04:23.356 CC module/env_dpdk/env_dpdk_rpc.o 00:04:23.356 CC module/blob/bdev/blob_bdev.o 00:04:23.356 CC module/keyring/file/keyring.o 00:04:23.356 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:23.356 CC module/keyring/linux/keyring.o 00:04:23.356 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:23.356 CC module/sock/posix/posix.o 00:04:23.356 CC module/fsdev/aio/fsdev_aio.o 00:04:23.356 CC module/accel/error/accel_error.o 00:04:23.356 CC module/scheduler/gscheduler/gscheduler.o 00:04:23.356 LIB libspdk_env_dpdk_rpc.a 00:04:23.356 SO libspdk_env_dpdk_rpc.so.6.0 00:04:23.356 SYMLINK libspdk_env_dpdk_rpc.so 00:04:23.356 CC module/keyring/linux/keyring_rpc.o 00:04:23.356 CC module/keyring/file/keyring_rpc.o 00:04:23.356 LIB libspdk_scheduler_dpdk_governor.a 00:04:23.356 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:23.356 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:23.356 LIB libspdk_scheduler_gscheduler.a 00:04:23.356 CC module/accel/error/accel_error_rpc.o 00:04:23.613 SO libspdk_scheduler_gscheduler.so.4.0 00:04:23.613 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:23.613 LIB libspdk_keyring_linux.a 00:04:23.613 LIB libspdk_scheduler_dynamic.a 00:04:23.613 CC module/fsdev/aio/linux_aio_mgr.o 00:04:23.613 SO libspdk_keyring_linux.so.1.0 00:04:23.613 LIB libspdk_keyring_file.a 00:04:23.613 SYMLINK libspdk_scheduler_gscheduler.so 00:04:23.613 SO libspdk_scheduler_dynamic.so.4.0 00:04:23.613 SO libspdk_keyring_file.so.2.0 00:04:23.613 SYMLINK libspdk_keyring_linux.so 00:04:23.613 SYMLINK libspdk_keyring_file.so 00:04:23.613 SYMLINK libspdk_scheduler_dynamic.so 00:04:23.613 LIB libspdk_blob_bdev.a 00:04:23.613 LIB libspdk_accel_error.a 00:04:23.613 SO libspdk_accel_error.so.2.0 00:04:23.613 SO libspdk_blob_bdev.so.12.0 00:04:23.613 SYMLINK libspdk_accel_error.so 00:04:23.613 SYMLINK libspdk_blob_bdev.so 00:04:23.613 CC module/accel/ioat/accel_ioat.o 00:04:23.613 CC module/accel/ioat/accel_ioat_rpc.o 00:04:23.613 CC module/accel/dsa/accel_dsa.o 00:04:23.613 CC module/accel/dsa/accel_dsa_rpc.o 00:04:23.613 CC module/accel/iaa/accel_iaa.o 00:04:23.613 CC module/accel/iaa/accel_iaa_rpc.o 00:04:23.873 LIB libspdk_accel_ioat.a 00:04:23.873 LIB libspdk_accel_iaa.a 00:04:23.873 SO libspdk_accel_ioat.so.6.0 00:04:23.873 SO libspdk_accel_iaa.so.3.0 00:04:23.873 CC module/blobfs/bdev/blobfs_bdev.o 00:04:23.873 CC module/bdev/delay/vbdev_delay.o 00:04:23.873 CC module/bdev/error/vbdev_error.o 00:04:23.873 CC module/bdev/gpt/gpt.o 00:04:23.873 SYMLINK libspdk_accel_ioat.so 00:04:23.873 CC module/bdev/gpt/vbdev_gpt.o 00:04:23.873 SYMLINK libspdk_accel_iaa.so 00:04:23.873 CC module/bdev/lvol/vbdev_lvol.o 00:04:23.873 LIB libspdk_accel_dsa.a 00:04:23.873 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:23.873 SO libspdk_accel_dsa.so.5.0 00:04:23.873 LIB libspdk_sock_posix.a 00:04:23.873 SYMLINK libspdk_accel_dsa.so 00:04:23.873 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:23.873 SO libspdk_sock_posix.so.6.0 00:04:24.131 LIB libspdk_fsdev_aio.a 00:04:24.131 SO libspdk_fsdev_aio.so.1.0 00:04:24.131 SYMLINK libspdk_sock_posix.so 00:04:24.131 LIB libspdk_bdev_gpt.a 00:04:24.131 CC module/bdev/malloc/bdev_malloc.o 00:04:24.131 SYMLINK libspdk_fsdev_aio.so 00:04:24.131 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:24.131 CC module/bdev/error/vbdev_error_rpc.o 00:04:24.131 LIB libspdk_blobfs_bdev.a 00:04:24.131 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:04:24.131 SO libspdk_bdev_gpt.so.6.0 00:04:24.131 CC module/bdev/null/bdev_null.o 00:04:24.131 SO libspdk_blobfs_bdev.so.6.0 00:04:24.131 LIB libspdk_bdev_delay.a 00:04:24.131 SO libspdk_bdev_delay.so.6.0 00:04:24.131 SYMLINK libspdk_bdev_gpt.so 00:04:24.131 SYMLINK libspdk_blobfs_bdev.so 00:04:24.131 CC module/bdev/nvme/bdev_nvme.o 00:04:24.131 LIB libspdk_bdev_error.a 00:04:24.131 SYMLINK libspdk_bdev_delay.so 00:04:24.131 CC module/bdev/null/bdev_null_rpc.o 00:04:24.131 SO libspdk_bdev_error.so.6.0 00:04:24.388 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:24.388 SYMLINK libspdk_bdev_error.so 00:04:24.388 CC module/bdev/nvme/nvme_rpc.o 00:04:24.388 CC module/bdev/raid/bdev_raid.o 00:04:24.388 CC module/bdev/passthru/vbdev_passthru.o 00:04:24.388 CC module/bdev/raid/bdev_raid_rpc.o 00:04:24.388 CC module/bdev/split/vbdev_split.o 00:04:24.388 LIB libspdk_bdev_null.a 00:04:24.388 SO libspdk_bdev_null.so.6.0 00:04:24.388 LIB libspdk_bdev_lvol.a 00:04:24.388 LIB libspdk_bdev_malloc.a 00:04:24.388 SO libspdk_bdev_lvol.so.6.0 00:04:24.388 SO libspdk_bdev_malloc.so.6.0 00:04:24.388 SYMLINK libspdk_bdev_null.so 00:04:24.388 SYMLINK libspdk_bdev_lvol.so 00:04:24.388 SYMLINK libspdk_bdev_malloc.so 00:04:24.388 CC module/bdev/split/vbdev_split_rpc.o 00:04:24.645 CC module/bdev/nvme/bdev_mdns_client.o 00:04:24.645 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:24.645 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:24.645 CC module/bdev/xnvme/bdev_xnvme.o 00:04:24.645 LIB libspdk_bdev_split.a 00:04:24.645 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:24.645 CC module/bdev/aio/bdev_aio.o 00:04:24.645 CC module/bdev/ftl/bdev_ftl.o 00:04:24.645 SO libspdk_bdev_split.so.6.0 00:04:24.645 LIB libspdk_bdev_passthru.a 00:04:24.645 SO libspdk_bdev_passthru.so.6.0 00:04:24.645 SYMLINK libspdk_bdev_split.so 00:04:24.645 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:24.645 SYMLINK libspdk_bdev_passthru.so 00:04:24.645 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:24.903 LIB libspdk_bdev_xnvme.a 00:04:24.903 SO libspdk_bdev_xnvme.so.3.0 00:04:24.903 CC module/bdev/iscsi/bdev_iscsi.o 00:04:24.903 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:24.903 LIB libspdk_bdev_zone_block.a 00:04:24.903 SYMLINK libspdk_bdev_xnvme.so 00:04:24.903 CC module/bdev/raid/bdev_raid_sb.o 00:04:24.903 SO libspdk_bdev_zone_block.so.6.0 00:04:24.903 CC module/bdev/raid/raid0.o 00:04:24.903 CC module/bdev/aio/bdev_aio_rpc.o 00:04:24.903 LIB libspdk_bdev_ftl.a 00:04:24.903 SO libspdk_bdev_ftl.so.6.0 00:04:24.903 SYMLINK libspdk_bdev_zone_block.so 00:04:25.162 CC module/bdev/raid/raid1.o 00:04:25.162 CC module/bdev/nvme/vbdev_opal.o 00:04:25.162 SYMLINK libspdk_bdev_ftl.so 00:04:25.162 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:25.162 LIB libspdk_bdev_aio.a 00:04:25.162 SO libspdk_bdev_aio.so.6.0 00:04:25.162 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:25.162 CC module/bdev/raid/concat.o 00:04:25.162 SYMLINK libspdk_bdev_aio.so 00:04:25.162 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:25.162 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:25.162 LIB libspdk_bdev_iscsi.a 00:04:25.162 SO libspdk_bdev_iscsi.so.6.0 00:04:25.162 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:25.421 SYMLINK libspdk_bdev_iscsi.so 00:04:25.421 LIB libspdk_bdev_raid.a 00:04:25.421 SO libspdk_bdev_raid.so.6.0 00:04:25.421 SYMLINK libspdk_bdev_raid.so 00:04:25.679 LIB libspdk_bdev_virtio.a 00:04:25.679 SO libspdk_bdev_virtio.so.6.0 00:04:25.679 SYMLINK libspdk_bdev_virtio.so 00:04:26.611 LIB libspdk_bdev_nvme.a 00:04:26.611 
SO libspdk_bdev_nvme.so.7.1 00:04:26.611 SYMLINK libspdk_bdev_nvme.so 00:04:27.177 CC module/event/subsystems/vmd/vmd.o 00:04:27.177 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:27.177 CC module/event/subsystems/iobuf/iobuf.o 00:04:27.177 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:27.178 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:27.178 CC module/event/subsystems/scheduler/scheduler.o 00:04:27.178 CC module/event/subsystems/fsdev/fsdev.o 00:04:27.178 CC module/event/subsystems/keyring/keyring.o 00:04:27.178 CC module/event/subsystems/sock/sock.o 00:04:27.178 LIB libspdk_event_sock.a 00:04:27.178 LIB libspdk_event_vhost_blk.a 00:04:27.178 LIB libspdk_event_keyring.a 00:04:27.178 LIB libspdk_event_fsdev.a 00:04:27.178 LIB libspdk_event_scheduler.a 00:04:27.178 LIB libspdk_event_vmd.a 00:04:27.178 SO libspdk_event_sock.so.5.0 00:04:27.178 SO libspdk_event_vhost_blk.so.3.0 00:04:27.178 SO libspdk_event_keyring.so.1.0 00:04:27.178 SO libspdk_event_fsdev.so.1.0 00:04:27.178 LIB libspdk_event_iobuf.a 00:04:27.178 SO libspdk_event_scheduler.so.4.0 00:04:27.178 SO libspdk_event_vmd.so.6.0 00:04:27.178 SO libspdk_event_iobuf.so.3.0 00:04:27.178 SYMLINK libspdk_event_sock.so 00:04:27.178 SYMLINK libspdk_event_keyring.so 00:04:27.178 SYMLINK libspdk_event_vhost_blk.so 00:04:27.178 SYMLINK libspdk_event_fsdev.so 00:04:27.178 SYMLINK libspdk_event_scheduler.so 00:04:27.178 SYMLINK libspdk_event_vmd.so 00:04:27.178 SYMLINK libspdk_event_iobuf.so 00:04:27.436 CC module/event/subsystems/accel/accel.o 00:04:27.694 LIB libspdk_event_accel.a 00:04:27.694 SO libspdk_event_accel.so.6.0 00:04:27.694 SYMLINK libspdk_event_accel.so 00:04:27.953 CC module/event/subsystems/bdev/bdev.o 00:04:28.211 LIB libspdk_event_bdev.a 00:04:28.211 SO libspdk_event_bdev.so.6.0 00:04:28.211 SYMLINK libspdk_event_bdev.so 00:04:28.211 CC module/event/subsystems/nbd/nbd.o 00:04:28.211 CC module/event/subsystems/scsi/scsi.o 00:04:28.211 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:28.211 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:28.211 CC module/event/subsystems/ublk/ublk.o 00:04:28.469 LIB libspdk_event_nbd.a 00:04:28.470 LIB libspdk_event_ublk.a 00:04:28.470 LIB libspdk_event_scsi.a 00:04:28.470 SO libspdk_event_nbd.so.6.0 00:04:28.470 SO libspdk_event_ublk.so.3.0 00:04:28.470 SO libspdk_event_scsi.so.6.0 00:04:28.470 SYMLINK libspdk_event_scsi.so 00:04:28.470 LIB libspdk_event_nvmf.a 00:04:28.470 SYMLINK libspdk_event_ublk.so 00:04:28.470 SYMLINK libspdk_event_nbd.so 00:04:28.470 SO libspdk_event_nvmf.so.6.0 00:04:28.470 SYMLINK libspdk_event_nvmf.so 00:04:28.728 CC module/event/subsystems/iscsi/iscsi.o 00:04:28.728 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:28.728 LIB libspdk_event_vhost_scsi.a 00:04:28.728 LIB libspdk_event_iscsi.a 00:04:28.728 SO libspdk_event_vhost_scsi.so.3.0 00:04:28.728 SO libspdk_event_iscsi.so.6.0 00:04:28.986 SYMLINK libspdk_event_vhost_scsi.so 00:04:28.986 SYMLINK libspdk_event_iscsi.so 00:04:28.986 SO libspdk.so.6.0 00:04:28.986 SYMLINK libspdk.so 00:04:29.244 CXX app/trace/trace.o 00:04:29.244 CC app/spdk_lspci/spdk_lspci.o 00:04:29.244 CC app/trace_record/trace_record.o 00:04:29.244 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:29.244 CC app/nvmf_tgt/nvmf_main.o 00:04:29.244 CC app/iscsi_tgt/iscsi_tgt.o 00:04:29.244 CC app/spdk_tgt/spdk_tgt.o 00:04:29.244 CC examples/ioat/perf/perf.o 00:04:29.244 CC test/thread/poller_perf/poller_perf.o 00:04:29.244 CC examples/util/zipf/zipf.o 00:04:29.244 LINK spdk_lspci 00:04:29.244 LINK nvmf_tgt 00:04:29.244 LINK 
poller_perf 00:04:29.500 LINK interrupt_tgt 00:04:29.500 LINK spdk_tgt 00:04:29.500 LINK zipf 00:04:29.500 LINK iscsi_tgt 00:04:29.500 LINK spdk_trace_record 00:04:29.500 LINK ioat_perf 00:04:29.500 CC app/spdk_nvme_perf/perf.o 00:04:29.500 CC app/spdk_nvme_identify/identify.o 00:04:29.500 LINK spdk_trace 00:04:29.501 CC app/spdk_nvme_discover/discovery_aer.o 00:04:29.501 CC app/spdk_top/spdk_top.o 00:04:29.757 CC test/dma/test_dma/test_dma.o 00:04:29.757 CC examples/ioat/verify/verify.o 00:04:29.757 CC app/spdk_dd/spdk_dd.o 00:04:29.757 CC test/app/bdev_svc/bdev_svc.o 00:04:29.757 CC examples/thread/thread/thread_ex.o 00:04:29.757 LINK spdk_nvme_discover 00:04:29.757 CC app/fio/nvme/fio_plugin.o 00:04:29.757 LINK verify 00:04:29.757 LINK bdev_svc 00:04:30.013 LINK spdk_dd 00:04:30.013 LINK thread 00:04:30.013 CC app/fio/bdev/fio_plugin.o 00:04:30.013 LINK test_dma 00:04:30.013 CC examples/sock/hello_world/hello_sock.o 00:04:30.270 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:30.270 LINK spdk_nvme_perf 00:04:30.270 CC test/app/histogram_perf/histogram_perf.o 00:04:30.270 CC examples/vmd/lsvmd/lsvmd.o 00:04:30.270 LINK spdk_nvme 00:04:30.270 LINK spdk_nvme_identify 00:04:30.270 LINK histogram_perf 00:04:30.270 TEST_HEADER include/spdk/accel.h 00:04:30.270 TEST_HEADER include/spdk/accel_module.h 00:04:30.270 TEST_HEADER include/spdk/assert.h 00:04:30.270 TEST_HEADER include/spdk/barrier.h 00:04:30.270 TEST_HEADER include/spdk/base64.h 00:04:30.270 TEST_HEADER include/spdk/bdev.h 00:04:30.270 TEST_HEADER include/spdk/bdev_module.h 00:04:30.270 TEST_HEADER include/spdk/bdev_zone.h 00:04:30.270 TEST_HEADER include/spdk/bit_array.h 00:04:30.270 TEST_HEADER include/spdk/bit_pool.h 00:04:30.270 TEST_HEADER include/spdk/blob_bdev.h 00:04:30.270 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:30.270 TEST_HEADER include/spdk/blobfs.h 00:04:30.270 LINK lsvmd 00:04:30.270 TEST_HEADER include/spdk/blob.h 00:04:30.270 TEST_HEADER include/spdk/conf.h 00:04:30.270 TEST_HEADER include/spdk/config.h 00:04:30.270 TEST_HEADER include/spdk/cpuset.h 00:04:30.270 LINK hello_sock 00:04:30.270 TEST_HEADER include/spdk/crc16.h 00:04:30.270 TEST_HEADER include/spdk/crc32.h 00:04:30.270 TEST_HEADER include/spdk/crc64.h 00:04:30.270 TEST_HEADER include/spdk/dif.h 00:04:30.270 TEST_HEADER include/spdk/dma.h 00:04:30.270 TEST_HEADER include/spdk/endian.h 00:04:30.270 TEST_HEADER include/spdk/env_dpdk.h 00:04:30.270 TEST_HEADER include/spdk/env.h 00:04:30.270 TEST_HEADER include/spdk/event.h 00:04:30.527 TEST_HEADER include/spdk/fd_group.h 00:04:30.527 TEST_HEADER include/spdk/fd.h 00:04:30.527 TEST_HEADER include/spdk/file.h 00:04:30.527 TEST_HEADER include/spdk/fsdev.h 00:04:30.527 TEST_HEADER include/spdk/fsdev_module.h 00:04:30.527 TEST_HEADER include/spdk/ftl.h 00:04:30.527 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:30.527 TEST_HEADER include/spdk/gpt_spec.h 00:04:30.527 TEST_HEADER include/spdk/hexlify.h 00:04:30.527 TEST_HEADER include/spdk/histogram_data.h 00:04:30.527 TEST_HEADER include/spdk/idxd.h 00:04:30.527 TEST_HEADER include/spdk/idxd_spec.h 00:04:30.527 TEST_HEADER include/spdk/init.h 00:04:30.527 TEST_HEADER include/spdk/ioat.h 00:04:30.527 TEST_HEADER include/spdk/ioat_spec.h 00:04:30.527 TEST_HEADER include/spdk/iscsi_spec.h 00:04:30.527 TEST_HEADER include/spdk/json.h 00:04:30.527 TEST_HEADER include/spdk/jsonrpc.h 00:04:30.527 TEST_HEADER include/spdk/keyring.h 00:04:30.527 TEST_HEADER include/spdk/keyring_module.h 00:04:30.527 TEST_HEADER include/spdk/likely.h 00:04:30.527 TEST_HEADER 
include/spdk/log.h 00:04:30.527 TEST_HEADER include/spdk/lvol.h 00:04:30.527 TEST_HEADER include/spdk/md5.h 00:04:30.527 TEST_HEADER include/spdk/memory.h 00:04:30.527 TEST_HEADER include/spdk/mmio.h 00:04:30.527 TEST_HEADER include/spdk/nbd.h 00:04:30.527 TEST_HEADER include/spdk/net.h 00:04:30.527 TEST_HEADER include/spdk/notify.h 00:04:30.527 TEST_HEADER include/spdk/nvme.h 00:04:30.527 TEST_HEADER include/spdk/nvme_intel.h 00:04:30.527 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:30.527 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:30.527 TEST_HEADER include/spdk/nvme_spec.h 00:04:30.527 TEST_HEADER include/spdk/nvme_zns.h 00:04:30.527 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:30.527 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:30.527 TEST_HEADER include/spdk/nvmf.h 00:04:30.527 TEST_HEADER include/spdk/nvmf_spec.h 00:04:30.527 TEST_HEADER include/spdk/nvmf_transport.h 00:04:30.527 CC test/event/event_perf/event_perf.o 00:04:30.527 TEST_HEADER include/spdk/opal.h 00:04:30.527 TEST_HEADER include/spdk/opal_spec.h 00:04:30.527 TEST_HEADER include/spdk/pci_ids.h 00:04:30.527 TEST_HEADER include/spdk/pipe.h 00:04:30.527 TEST_HEADER include/spdk/queue.h 00:04:30.527 TEST_HEADER include/spdk/reduce.h 00:04:30.527 TEST_HEADER include/spdk/rpc.h 00:04:30.527 TEST_HEADER include/spdk/scheduler.h 00:04:30.527 TEST_HEADER include/spdk/scsi.h 00:04:30.527 TEST_HEADER include/spdk/scsi_spec.h 00:04:30.527 TEST_HEADER include/spdk/sock.h 00:04:30.527 TEST_HEADER include/spdk/stdinc.h 00:04:30.527 TEST_HEADER include/spdk/string.h 00:04:30.527 TEST_HEADER include/spdk/thread.h 00:04:30.527 TEST_HEADER include/spdk/trace.h 00:04:30.527 TEST_HEADER include/spdk/trace_parser.h 00:04:30.527 TEST_HEADER include/spdk/tree.h 00:04:30.527 TEST_HEADER include/spdk/ublk.h 00:04:30.527 TEST_HEADER include/spdk/util.h 00:04:30.527 TEST_HEADER include/spdk/uuid.h 00:04:30.527 LINK spdk_bdev 00:04:30.527 TEST_HEADER include/spdk/version.h 00:04:30.527 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:30.527 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:30.527 TEST_HEADER include/spdk/vhost.h 00:04:30.527 CC test/env/mem_callbacks/mem_callbacks.o 00:04:30.527 TEST_HEADER include/spdk/vmd.h 00:04:30.527 TEST_HEADER include/spdk/xor.h 00:04:30.527 TEST_HEADER include/spdk/zipf.h 00:04:30.527 CXX test/cpp_headers/accel.o 00:04:30.527 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:30.527 LINK spdk_top 00:04:30.527 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:30.527 LINK nvme_fuzz 00:04:30.527 LINK event_perf 00:04:30.527 CC test/event/reactor/reactor.o 00:04:30.527 CXX test/cpp_headers/accel_module.o 00:04:30.527 CC examples/vmd/led/led.o 00:04:30.783 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:30.783 CC test/app/jsoncat/jsoncat.o 00:04:30.783 CXX test/cpp_headers/assert.o 00:04:30.783 LINK reactor 00:04:30.783 LINK led 00:04:30.783 CC app/vhost/vhost.o 00:04:30.783 CXX test/cpp_headers/barrier.o 00:04:30.783 LINK jsoncat 00:04:30.783 CC test/event/reactor_perf/reactor_perf.o 00:04:30.783 CXX test/cpp_headers/base64.o 00:04:30.783 CXX test/cpp_headers/bdev.o 00:04:30.783 CC examples/idxd/perf/perf.o 00:04:31.040 LINK vhost 00:04:31.040 LINK reactor_perf 00:04:31.040 LINK mem_callbacks 00:04:31.040 CXX test/cpp_headers/bdev_module.o 00:04:31.040 CC test/event/app_repeat/app_repeat.o 00:04:31.040 CC test/event/scheduler/scheduler.o 00:04:31.040 LINK vhost_fuzz 00:04:31.040 CXX test/cpp_headers/bdev_zone.o 00:04:31.040 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:31.040 CXX test/cpp_headers/bit_array.o 
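The TEST_HEADER / `CXX test/cpp_headers/*.o` lines running through this stretch of the build are SPDK's public-header check: every header under include/spdk is compiled as its own C++ translation unit, so a header that is not self-contained (or not C++-clean) fails the build on its own. A minimal re-creation of that loop, assuming a plain g++ toolchain (paths and flags are illustrative, not SPDK's actual build rules):

```bash
#!/usr/bin/env bash
# Hypothetical stand-in for the test/cpp_headers compile pass seen above:
# build every public header as a standalone C++ object so a missing
# include or C++-incompatible declaration fails loudly.
set -e
mkdir -p test/cpp_headers
for hdr in include/spdk/*.h; do
    obj="test/cpp_headers/$(basename "${hdr%.h}").o"
    echo "CXX $obj"
    # -x c++ forces the .h file to be compiled as a C++ source file.
    g++ -x c++ -I include -c "$hdr" -o "$obj"
done
```

Compiling the headers as C++ rather than C is the stricter variant of the check, since it also catches declarations that would break C++ consumers of the library.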
00:04:31.040 CC test/app/stub/stub.o 00:04:31.040 LINK idxd_perf 00:04:31.040 CC test/env/vtophys/vtophys.o 00:04:31.298 LINK app_repeat 00:04:31.298 CXX test/cpp_headers/bit_pool.o 00:04:31.298 CXX test/cpp_headers/blob_bdev.o 00:04:31.298 CXX test/cpp_headers/blobfs_bdev.o 00:04:31.298 LINK scheduler 00:04:31.298 LINK stub 00:04:31.298 LINK vtophys 00:04:31.298 CXX test/cpp_headers/blobfs.o 00:04:31.298 CXX test/cpp_headers/blob.o 00:04:31.298 CC examples/accel/perf/accel_perf.o 00:04:31.298 CXX test/cpp_headers/conf.o 00:04:31.298 CXX test/cpp_headers/config.o 00:04:31.298 LINK hello_fsdev 00:04:31.298 CXX test/cpp_headers/cpuset.o 00:04:31.298 CXX test/cpp_headers/crc16.o 00:04:31.298 CXX test/cpp_headers/crc32.o 00:04:31.556 CXX test/cpp_headers/crc64.o 00:04:31.556 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:31.556 CXX test/cpp_headers/dif.o 00:04:31.556 CXX test/cpp_headers/dma.o 00:04:31.556 CXX test/cpp_headers/endian.o 00:04:31.556 CXX test/cpp_headers/env_dpdk.o 00:04:31.556 CXX test/cpp_headers/env.o 00:04:31.556 LINK env_dpdk_post_init 00:04:31.556 CXX test/cpp_headers/event.o 00:04:31.556 CXX test/cpp_headers/fd_group.o 00:04:31.556 CC test/env/memory/memory_ut.o 00:04:31.556 CXX test/cpp_headers/fd.o 00:04:31.556 CXX test/cpp_headers/file.o 00:04:31.816 CC examples/blob/hello_world/hello_blob.o 00:04:31.816 LINK accel_perf 00:04:31.816 CC examples/blob/cli/blobcli.o 00:04:31.816 CC test/rpc_client/rpc_client_test.o 00:04:31.816 CXX test/cpp_headers/fsdev.o 00:04:31.816 CXX test/cpp_headers/fsdev_module.o 00:04:31.816 CC test/nvme/aer/aer.o 00:04:31.816 CXX test/cpp_headers/ftl.o 00:04:31.816 CC test/accel/dif/dif.o 00:04:31.816 LINK hello_blob 00:04:31.816 LINK rpc_client_test 00:04:32.081 CXX test/cpp_headers/fuse_dispatcher.o 00:04:32.081 CC examples/nvme/hello_world/hello_world.o 00:04:32.081 CC examples/nvme/reconnect/reconnect.o 00:04:32.081 LINK iscsi_fuzz 00:04:32.081 LINK aer 00:04:32.081 CC test/env/pci/pci_ut.o 00:04:32.081 CXX test/cpp_headers/gpt_spec.o 00:04:32.081 CXX test/cpp_headers/hexlify.o 00:04:32.081 LINK hello_world 00:04:32.339 LINK blobcli 00:04:32.339 CC test/blobfs/mkfs/mkfs.o 00:04:32.339 CC test/nvme/reset/reset.o 00:04:32.339 CXX test/cpp_headers/histogram_data.o 00:04:32.339 LINK reconnect 00:04:32.339 CXX test/cpp_headers/idxd.o 00:04:32.339 LINK pci_ut 00:04:32.339 LINK mkfs 00:04:32.339 CC examples/bdev/hello_world/hello_bdev.o 00:04:32.339 CC examples/bdev/bdevperf/bdevperf.o 00:04:32.339 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:32.596 CXX test/cpp_headers/idxd_spec.o 00:04:32.596 LINK reset 00:04:32.596 CXX test/cpp_headers/init.o 00:04:32.596 LINK dif 00:04:32.596 CXX test/cpp_headers/ioat.o 00:04:32.596 CC test/nvme/sgl/sgl.o 00:04:32.596 LINK hello_bdev 00:04:32.596 CXX test/cpp_headers/ioat_spec.o 00:04:32.596 CC test/nvme/e2edp/nvme_dp.o 00:04:32.596 LINK memory_ut 00:04:32.856 CXX test/cpp_headers/iscsi_spec.o 00:04:32.856 CC test/nvme/overhead/overhead.o 00:04:32.856 CC test/nvme/err_injection/err_injection.o 00:04:32.856 CC test/nvme/reserve/reserve.o 00:04:32.856 CC test/nvme/startup/startup.o 00:04:32.856 LINK sgl 00:04:32.856 CC test/nvme/simple_copy/simple_copy.o 00:04:32.856 CXX test/cpp_headers/json.o 00:04:32.856 LINK nvme_manage 00:04:32.856 LINK nvme_dp 00:04:32.856 LINK startup 00:04:32.856 LINK err_injection 00:04:33.119 LINK reserve 00:04:33.119 CXX test/cpp_headers/jsonrpc.o 00:04:33.119 LINK overhead 00:04:33.119 LINK bdevperf 00:04:33.119 LINK simple_copy 00:04:33.119 CXX 
test/cpp_headers/keyring.o 00:04:33.119 CC test/nvme/connect_stress/connect_stress.o 00:04:33.119 CXX test/cpp_headers/keyring_module.o 00:04:33.119 CC examples/nvme/arbitration/arbitration.o 00:04:33.119 CC examples/nvme/hotplug/hotplug.o 00:04:33.119 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:33.119 CC examples/nvme/abort/abort.o 00:04:33.119 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:33.119 CXX test/cpp_headers/likely.o 00:04:33.119 LINK connect_stress 00:04:33.378 CC test/nvme/boot_partition/boot_partition.o 00:04:33.378 CXX test/cpp_headers/log.o 00:04:33.378 CC test/nvme/compliance/nvme_compliance.o 00:04:33.378 LINK cmb_copy 00:04:33.378 LINK arbitration 00:04:33.378 LINK hotplug 00:04:33.378 LINK pmr_persistence 00:04:33.378 LINK boot_partition 00:04:33.378 CC test/nvme/fused_ordering/fused_ordering.o 00:04:33.378 CXX test/cpp_headers/lvol.o 00:04:33.378 CXX test/cpp_headers/md5.o 00:04:33.378 CC test/lvol/esnap/esnap.o 00:04:33.378 CXX test/cpp_headers/memory.o 00:04:33.378 CXX test/cpp_headers/mmio.o 00:04:33.635 LINK abort 00:04:33.635 LINK fused_ordering 00:04:33.635 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:33.635 CXX test/cpp_headers/nbd.o 00:04:33.635 LINK nvme_compliance 00:04:33.635 CXX test/cpp_headers/net.o 00:04:33.635 CC test/bdev/bdevio/bdevio.o 00:04:33.635 CXX test/cpp_headers/notify.o 00:04:33.635 CXX test/cpp_headers/nvme.o 00:04:33.635 CC test/nvme/fdp/fdp.o 00:04:33.635 CC test/nvme/cuse/cuse.o 00:04:33.892 LINK doorbell_aers 00:04:33.892 CXX test/cpp_headers/nvme_intel.o 00:04:33.892 CXX test/cpp_headers/nvme_ocssd.o 00:04:33.892 CC examples/nvmf/nvmf/nvmf.o 00:04:33.892 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:33.892 CXX test/cpp_headers/nvme_spec.o 00:04:33.892 CXX test/cpp_headers/nvme_zns.o 00:04:33.892 CXX test/cpp_headers/nvmf_cmd.o 00:04:33.892 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:33.892 CXX test/cpp_headers/nvmf.o 00:04:33.892 LINK bdevio 00:04:33.892 LINK fdp 00:04:33.892 CXX test/cpp_headers/nvmf_spec.o 00:04:34.151 LINK nvmf 00:04:34.151 CXX test/cpp_headers/nvmf_transport.o 00:04:34.151 CXX test/cpp_headers/opal.o 00:04:34.151 CXX test/cpp_headers/opal_spec.o 00:04:34.151 CXX test/cpp_headers/pci_ids.o 00:04:34.151 CXX test/cpp_headers/pipe.o 00:04:34.151 CXX test/cpp_headers/queue.o 00:04:34.151 CXX test/cpp_headers/reduce.o 00:04:34.151 CXX test/cpp_headers/rpc.o 00:04:34.151 CXX test/cpp_headers/scheduler.o 00:04:34.151 CXX test/cpp_headers/scsi.o 00:04:34.151 CXX test/cpp_headers/scsi_spec.o 00:04:34.151 CXX test/cpp_headers/sock.o 00:04:34.151 CXX test/cpp_headers/stdinc.o 00:04:34.151 CXX test/cpp_headers/string.o 00:04:34.151 CXX test/cpp_headers/thread.o 00:04:34.410 CXX test/cpp_headers/trace.o 00:04:34.410 CXX test/cpp_headers/trace_parser.o 00:04:34.410 CXX test/cpp_headers/tree.o 00:04:34.410 CXX test/cpp_headers/ublk.o 00:04:34.410 CXX test/cpp_headers/util.o 00:04:34.410 CXX test/cpp_headers/uuid.o 00:04:34.410 CXX test/cpp_headers/version.o 00:04:34.410 CXX test/cpp_headers/vfio_user_pci.o 00:04:34.410 CXX test/cpp_headers/vfio_user_spec.o 00:04:34.410 CXX test/cpp_headers/vhost.o 00:04:34.410 CXX test/cpp_headers/vmd.o 00:04:34.410 CXX test/cpp_headers/xor.o 00:04:34.410 CXX test/cpp_headers/zipf.o 00:04:34.671 LINK cuse 00:04:38.861 LINK esnap 00:04:38.861 00:04:38.861 real 1m7.089s 00:04:38.861 user 6m5.198s 00:04:38.861 sys 1m3.578s 00:04:38.861 04:21:35 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:38.862 ************************************ 00:04:38.862 END TEST make 00:04:38.862 
************************************ 00:04:38.862 04:21:35 make -- common/autotest_common.sh@10 -- $ set +x 00:04:38.862 04:21:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:38.862 04:21:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:38.862 04:21:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:38.862 04:21:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:38.862 04:21:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:38.862 04:21:35 -- pm/common@44 -- $ pid=5079 00:04:38.862 04:21:35 -- pm/common@50 -- $ kill -TERM 5079 00:04:38.862 04:21:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:38.862 04:21:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:38.862 04:21:35 -- pm/common@44 -- $ pid=5081 00:04:38.862 04:21:35 -- pm/common@50 -- $ kill -TERM 5081 00:04:38.862 04:21:35 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:38.862 04:21:35 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:39.121 04:21:35 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.121 04:21:35 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.121 04:21:35 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.121 04:21:35 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.121 04:21:35 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.121 04:21:35 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.121 04:21:35 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.121 04:21:35 -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.121 04:21:35 -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.121 04:21:35 -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.121 04:21:35 -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.121 04:21:35 -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.121 04:21:35 -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.121 04:21:35 -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.121 04:21:35 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.121 04:21:35 -- scripts/common.sh@344 -- # case "$op" in 00:04:39.121 04:21:35 -- scripts/common.sh@345 -- # : 1 00:04:39.121 04:21:35 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.121 04:21:35 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.121 04:21:35 -- scripts/common.sh@365 -- # decimal 1 00:04:39.121 04:21:35 -- scripts/common.sh@353 -- # local d=1 00:04:39.121 04:21:35 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.121 04:21:35 -- scripts/common.sh@355 -- # echo 1 00:04:39.121 04:21:35 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.121 04:21:35 -- scripts/common.sh@366 -- # decimal 2 00:04:39.121 04:21:35 -- scripts/common.sh@353 -- # local d=2 00:04:39.121 04:21:35 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.121 04:21:35 -- scripts/common.sh@355 -- # echo 2 00:04:39.121 04:21:35 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.121 04:21:35 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.121 04:21:35 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.121 04:21:35 -- scripts/common.sh@368 -- # return 0 00:04:39.121 04:21:35 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.121 04:21:35 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.121 --rc genhtml_branch_coverage=1 00:04:39.121 --rc genhtml_function_coverage=1 00:04:39.121 --rc genhtml_legend=1 00:04:39.121 --rc geninfo_all_blocks=1 00:04:39.121 --rc geninfo_unexecuted_blocks=1 00:04:39.121 00:04:39.121 ' 00:04:39.121 04:21:35 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.121 --rc genhtml_branch_coverage=1 00:04:39.121 --rc genhtml_function_coverage=1 00:04:39.121 --rc genhtml_legend=1 00:04:39.121 --rc geninfo_all_blocks=1 00:04:39.121 --rc geninfo_unexecuted_blocks=1 00:04:39.121 00:04:39.121 ' 00:04:39.121 04:21:35 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.121 --rc genhtml_branch_coverage=1 00:04:39.121 --rc genhtml_function_coverage=1 00:04:39.121 --rc genhtml_legend=1 00:04:39.121 --rc geninfo_all_blocks=1 00:04:39.121 --rc geninfo_unexecuted_blocks=1 00:04:39.121 00:04:39.121 ' 00:04:39.121 04:21:35 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.121 --rc genhtml_branch_coverage=1 00:04:39.121 --rc genhtml_function_coverage=1 00:04:39.121 --rc genhtml_legend=1 00:04:39.121 --rc geninfo_all_blocks=1 00:04:39.121 --rc geninfo_unexecuted_blocks=1 00:04:39.121 00:04:39.121 ' 00:04:39.121 04:21:35 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:39.121 04:21:35 -- nvmf/common.sh@7 -- # uname -s 00:04:39.121 04:21:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.121 04:21:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.121 04:21:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.121 04:21:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.121 04:21:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.121 04:21:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.121 04:21:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.121 04:21:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.121 04:21:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.121 04:21:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.121 04:21:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13189ee1-9cae-47b4-9e20-44b2397fdd91 00:04:39.121 
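The scripts/common.sh xtrace just above (`lt 1.15 2` through `return 0`) is how autotest picks its lcov options: both version strings are split on `.`, `-` and `:` and compared numerically field by field, with missing fields treated as 0, so lcov 1.15 sorts below 2 and the 1.x `--rc lcov_*_coverage` flag set is selected. The same comparison as a self-contained sketch (helper inlined and renamed; not the verbatim scripts/common.sh source):

```bash
# Returns success when $1 is strictly older than $2, mirroring the
# cmp_versions walk traced above ('lt 1.15 2' is true, so the 1.x
# lcov option set is used).
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Non-numeric or missing fields compare as 0.
        [[ ${ver1[v]:-0} =~ ^[0-9]+$ ]] || ver1[v]=0
        [[ ${ver2[v]:-0} =~ ^[0-9]+$ ]] || ver2[v]=0
        (( ver1[v] > ver2[v] )) && return 1
        (( ver1[v] < ver2[v] )) && return 0
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 < 2: use 1.x branch-coverage flags"
```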
04:21:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=13189ee1-9cae-47b4-9e20-44b2397fdd91 00:04:39.121 04:21:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.121 04:21:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.121 04:21:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:39.121 04:21:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.121 04:21:35 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:39.121 04:21:35 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:39.121 04:21:35 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.122 04:21:35 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.122 04:21:35 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.122 04:21:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.122 04:21:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.122 04:21:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.122 04:21:35 -- paths/export.sh@5 -- # export PATH 00:04:39.122 04:21:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.122 04:21:35 -- nvmf/common.sh@51 -- # : 0 00:04:39.122 04:21:35 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:39.122 04:21:35 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:39.122 04:21:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.122 04:21:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.122 04:21:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.122 04:21:35 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:39.122 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:39.122 04:21:35 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:39.122 04:21:35 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:39.122 04:21:35 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:39.122 04:21:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:39.122 04:21:35 -- spdk/autotest.sh@32 -- # uname -s 00:04:39.122 04:21:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:39.122 04:21:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:39.122 04:21:35 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:39.122 04:21:35 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:39.122 04:21:35 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:39.122 04:21:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:39.122 04:21:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:39.122 04:21:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:39.122 04:21:35 -- spdk/autotest.sh@48 -- # udevadm_pid=54266 00:04:39.122 04:21:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:39.122 04:21:35 -- pm/common@17 -- # local monitor 00:04:39.122 04:21:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.122 04:21:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.122 04:21:35 -- pm/common@25 -- # sleep 1 00:04:39.122 04:21:35 -- pm/common@21 -- # date +%s 00:04:39.122 04:21:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:39.122 04:21:35 -- pm/common@21 -- # date +%s 00:04:39.122 04:21:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732681295 00:04:39.122 04:21:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732681295 00:04:39.122 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732681295_collect-vmstat.pm.log 00:04:39.122 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732681295_collect-cpu-load.pm.log 00:04:40.067 04:21:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:40.067 04:21:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:40.067 04:21:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.067 04:21:36 -- common/autotest_common.sh@10 -- # set +x 00:04:40.067 04:21:36 -- spdk/autotest.sh@59 -- # create_test_list 00:04:40.067 04:21:36 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:40.067 04:21:36 -- common/autotest_common.sh@10 -- # set +x 00:04:40.067 04:21:36 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:40.067 04:21:36 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:40.067 04:21:36 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:40.067 04:21:36 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:40.067 04:21:36 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:40.067 04:21:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:40.067 04:21:36 -- common/autotest_common.sh@1457 -- # uname 00:04:40.067 04:21:36 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:40.067 04:21:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:40.067 04:21:36 -- common/autotest_common.sh@1477 -- # uname 00:04:40.067 04:21:36 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:40.067 04:21:36 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:40.067 04:21:36 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:40.333 lcov: LCOV version 1.15 00:04:40.333 04:21:36 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:55.231 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:55.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:10.154 04:22:06 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:10.154 04:22:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.154 04:22:06 -- common/autotest_common.sh@10 -- # set +x 00:05:10.154 04:22:06 -- spdk/autotest.sh@78 -- # rm -f 00:05:10.154 04:22:06 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.728 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:10.728 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:10.728 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:10.728 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:10.728 04:22:07 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:10.728 04:22:07 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:10.728 04:22:07 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:10.728 04:22:07 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:10.728 04:22:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.728 04:22:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:10.728 04:22:07 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:10.728 04:22:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:10.728 04:22:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.728 04:22:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.728 04:22:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:10.728 04:22:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:10.728 04:22:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:10.728 04:22:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.728 04:22:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.728 04:22:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:10.728 04:22:07 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:10.728 04:22:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:10.728 04:22:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.728 04:22:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.728 04:22:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:05:10.728 04:22:07 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:10.728 04:22:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:10.728 04:22:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.728 04:22:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.728 04:22:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:05:10.728 04:22:07 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:10.728 04:22:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:10.728 04:22:07 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.728 04:22:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.728 04:22:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:05:10.728 04:22:07 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:10.728 04:22:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:10.728 04:22:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.728 04:22:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.728 04:22:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:10.728 04:22:07 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:10.728 04:22:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:10.728 04:22:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.728 04:22:07 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:10.728 04:22:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.728 04:22:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.728 04:22:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:10.728 04:22:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:10.728 04:22:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:10.728 No valid GPT data, bailing 00:05:10.728 04:22:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:10.728 04:22:07 -- scripts/common.sh@394 -- # pt= 00:05:10.728 04:22:07 -- scripts/common.sh@395 -- # return 1 00:05:10.728 04:22:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:10.728 1+0 records in 00:05:10.728 1+0 records out 00:05:10.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316547 s, 33.1 MB/s 00:05:10.728 04:22:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.728 04:22:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.728 04:22:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:10.728 04:22:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:10.728 04:22:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:10.728 No valid GPT data, bailing 00:05:10.728 04:22:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:10.728 04:22:07 -- scripts/common.sh@394 -- # pt= 00:05:10.728 04:22:07 -- scripts/common.sh@395 -- # return 1 00:05:10.728 04:22:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:10.728 1+0 records in 00:05:10.728 1+0 records out 00:05:10.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00564376 s, 186 MB/s 00:05:10.728 04:22:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.728 04:22:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.728 04:22:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:10.728 04:22:07 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:10.728 04:22:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:10.990 No valid GPT data, bailing 00:05:10.990 04:22:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:10.990 04:22:07 -- scripts/common.sh@394 -- # pt= 00:05:10.990 04:22:07 -- scripts/common.sh@395 -- # return 1 00:05:10.990 04:22:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:10.990 1+0 
records in 00:05:10.990 1+0 records out 00:05:10.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00560697 s, 187 MB/s 00:05:10.990 04:22:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.990 04:22:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.990 04:22:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:10.990 04:22:07 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:10.990 04:22:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:10.990 No valid GPT data, bailing 00:05:10.990 04:22:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:10.990 04:22:07 -- scripts/common.sh@394 -- # pt= 00:05:10.990 04:22:07 -- scripts/common.sh@395 -- # return 1 00:05:10.990 04:22:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:10.990 1+0 records in 00:05:10.990 1+0 records out 00:05:10.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0060878 s, 172 MB/s 00:05:10.990 04:22:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.990 04:22:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.990 04:22:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:10.990 04:22:07 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:10.990 04:22:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:10.990 No valid GPT data, bailing 00:05:10.990 04:22:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:10.990 04:22:07 -- scripts/common.sh@394 -- # pt= 00:05:10.990 04:22:07 -- scripts/common.sh@395 -- # return 1 00:05:10.990 04:22:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:10.990 1+0 records in 00:05:10.990 1+0 records out 00:05:10.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00518338 s, 202 MB/s 00:05:10.990 04:22:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.990 04:22:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.990 04:22:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:10.990 04:22:07 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:10.990 04:22:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:11.251 No valid GPT data, bailing 00:05:11.251 04:22:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:11.251 04:22:07 -- scripts/common.sh@394 -- # pt= 00:05:11.251 04:22:07 -- scripts/common.sh@395 -- # return 1 00:05:11.251 04:22:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:11.251 1+0 records in 00:05:11.251 1+0 records out 00:05:11.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0059047 s, 178 MB/s 00:05:11.251 04:22:07 -- spdk/autotest.sh@105 -- # sync 00:05:11.251 04:22:07 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:11.251 04:22:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:11.251 04:22:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:13.167 04:22:09 -- spdk/autotest.sh@111 -- # uname -s 00:05:13.167 04:22:09 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:13.167 04:22:09 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:13.167 04:22:09 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:13.428 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.001 
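The repeated `No valid GPT data, bailing` / `dd` pairs above are the pre-test scrub from autotest.sh: each whole NVMe namespace (partitions and zoned namespaces are skipped) is probed for an existing partition table, and any device not in use gets its first MiB zeroed so stale labels cannot bleed into the tests. A trimmed sketch of that loop (the blkid fallback is taken from the trace; SPDK's spdk-gpt.py probe is elided):

```bash
# Illustrative scrub loop matching the trace above: wipe the first MiB
# of every idle, non-partition, non-zoned NVMe namespace.
shopt -s extglob
for dev in /dev/nvme*n!(*p*); do
    # Skip zoned namespaces (sysfs queue/zoned reports something other
    # than "none"); default to "none" if the attribute is absent.
    zoned=$(cat "/sys/block/$(basename "$dev")/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned != none ]] && continue
    # A device with a readable partition table is considered in use.
    if pt=$(blkid -s PTTYPE -o value "$dev") && [[ -n $pt ]]; then
        continue
    fi
    dd if=/dev/zero of="$dev" bs=1M count=1
done
```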
Hugepages 00:05:14.001 node hugesize free / total 00:05:14.001 node0 1048576kB 0 / 0 00:05:14.001 node0 2048kB 0 / 0 00:05:14.001 00:05:14.002 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:14.002 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:14.002 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:14.002 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:14.262 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:14.262 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:14.262 04:22:10 -- spdk/autotest.sh@117 -- # uname -s 00:05:14.262 04:22:10 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:14.262 04:22:10 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:14.262 04:22:10 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:14.837 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.409 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.409 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.409 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.409 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.409 04:22:11 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:16.352 04:22:12 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:16.352 04:22:12 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:16.352 04:22:12 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:16.352 04:22:12 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:16.352 04:22:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:16.352 04:22:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:16.352 04:22:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:16.352 04:22:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:16.352 04:22:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:16.352 04:22:12 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:16.352 04:22:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:16.352 04:22:12 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:16.918 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.918 Waiting for block devices as requested 00:05:16.918 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:16.918 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:17.176 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:17.176 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:22.498 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:22.498 04:22:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:22.498 04:22:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:22.498 04:22:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:22.498 04:22:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:22.498 04:22:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:22.498 04:22:18 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:22.498 04:22:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:22.498 04:22:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:22.498 04:22:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:22.498 04:22:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:22.498 04:22:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:22.498 04:22:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:22.498 04:22:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:22.498 04:22:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:22.498 04:22:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:22.498 04:22:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:22.498 04:22:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:22.498 04:22:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:22.498 04:22:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:22.498 04:22:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:22.498 04:22:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:22.498 04:22:18 -- common/autotest_common.sh@1543 -- # continue 00:05:22.498 04:22:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:22.498 04:22:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:22.498 04:22:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:22.498 04:22:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:22.499 04:22:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:22.499 04:22:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:22.499 04:22:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:22.499 04:22:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:22.499 04:22:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:22.499 04:22:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:22.499 04:22:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:22.499 04:22:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:22.499 04:22:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:22.499 04:22:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:22.499 04:22:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:22.499 04:22:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:22.499 04:22:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:22.499 04:22:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:22.499 04:22:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:22.499 04:22:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:22.499 04:22:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:22.499 04:22:18 -- common/autotest_common.sh@1543 -- # continue 00:05:22.499 04:22:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:22.499 04:22:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:22.499 04:22:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:22.499 04:22:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:22.499 04:22:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:22.499 04:22:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:22.499 04:22:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:22.499 04:22:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:22.499 04:22:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:22.499 04:22:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:22.499 04:22:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:22.499 04:22:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:22.499 04:22:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:22.499 04:22:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:22.499 04:22:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:22.499 04:22:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:22.499 04:22:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:22.499 04:22:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:22.499 04:22:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:22.499 04:22:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:22.499 04:22:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:22.499 04:22:18 -- common/autotest_common.sh@1543 -- # continue 00:05:22.499 04:22:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:22.499 04:22:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:22.499 04:22:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:22.499 04:22:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:22.499 04:22:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:22.499 04:22:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:22.499 04:22:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:22.499 04:22:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:22.499 04:22:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:22.499 04:22:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:22.499 04:22:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:22.499 04:22:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:22.499 04:22:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:22.499 04:22:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:22.499 04:22:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:22.499 04:22:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:22.499 04:22:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:22.499 04:22:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:22.499 04:22:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:22.499 04:22:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:22.499 04:22:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
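Each of the four near-identical blocks here repeats one gate per controller: resolve the PCI address to its /dev/nvmeX node through /sys/class/nvme, read OACS from `nvme id-ctrl` (0x12a on every controller here; bit 3, value 8, advertises namespace management), then confirm `unvmcap` is 0 so no capacity sits outside a namespace. The same gate condensed into one loop (loop body reconstructed from the trace, not verbatim autotest_common.sh):

```bash
# One pass of the per-controller gate traced above, for all four BDFs.
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    # The sysfs path ends in .../<bdf>/nvme/nvmeX; its basename names the node.
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrlr=/dev/$(basename "$path")
    # OACS bit 3 (0x8) advertises namespace management; 0x12a & 0x8 = 8.
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    (( oacs & 0x8 )) || continue
    # unvmcap == 0 means every byte of capacity already belongs to a namespace.
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && continue   # nothing to revert for this controller
done
```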
00:05:22.499 04:22:18 -- common/autotest_common.sh@1543 -- # continue 00:05:22.499 04:22:18 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:22.499 04:22:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:22.499 04:22:18 -- common/autotest_common.sh@10 -- # set +x 00:05:22.499 04:22:18 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:22.499 04:22:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:22.499 04:22:18 -- common/autotest_common.sh@10 -- # set +x 00:05:22.499 04:22:18 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.075 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.334 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.334 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.334 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.592 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.592 04:22:19 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:23.592 04:22:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.592 04:22:19 -- common/autotest_common.sh@10 -- # set +x 00:05:23.592 04:22:20 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:23.592 04:22:20 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:23.592 04:22:20 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:23.592 04:22:20 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:23.592 04:22:20 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:23.592 04:22:20 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:23.592 04:22:20 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:23.592 04:22:20 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:23.592 04:22:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:23.592 04:22:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:23.592 04:22:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.592 04:22:20 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:23.592 04:22:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:23.592 04:22:20 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:23.592 04:22:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:23.592 04:22:20 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:23.592 04:22:20 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:23.592 04:22:20 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:23.592 04:22:20 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:23.592 04:22:20 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:23.592 04:22:20 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:23.592 04:22:20 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:23.592 04:22:20 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:23.592 04:22:20 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:23.592 04:22:20 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:23.592 04:22:20 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:23.592 04:22:20 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
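The `cat /sys/bus/pci/devices/<bdf>/device` checks on either side of this point implement `get_nvme_bdfs_by_id 0x0a54`: opal_revert_cleanup only targets controllers whose PCI device id is 0x0a54, and every controller in this VM reports 0x0010 (QEMU's emulated NVMe), so the list comes back empty and the cleanup is skipped. The same filter as a standalone sketch (the gen_nvme.sh | jq enumeration is taken from the trace):

```bash
# Collect NVMe PCI addresses whose device id matches the requested one
# (0x0a54 in the run above); none match on this QEMU VM (all 0x0010).
want=0x0a54
matches=()
for bdf in $(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == "$want" ]] && matches+=("$bdf")
done
(( ${#matches[@]} > 0 )) || echo "no 0x0a54 controllers: skipping opal revert"
```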
00:05:23.592 04:22:20 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:23.592 04:22:20 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:23.592 04:22:20 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:23.592 04:22:20 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:23.592 04:22:20 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:23.592 04:22:20 -- common/autotest_common.sh@1572 -- # return 0 00:05:23.592 04:22:20 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:23.593 04:22:20 -- common/autotest_common.sh@1580 -- # return 0 00:05:23.593 04:22:20 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:23.593 04:22:20 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:23.593 04:22:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:23.593 04:22:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:23.593 04:22:20 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:23.593 04:22:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.593 04:22:20 -- common/autotest_common.sh@10 -- # set +x 00:05:23.593 04:22:20 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:23.593 04:22:20 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:23.593 04:22:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.593 04:22:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.593 04:22:20 -- common/autotest_common.sh@10 -- # set +x 00:05:23.593 ************************************ 00:05:23.593 START TEST env 00:05:23.593 ************************************ 00:05:23.593 04:22:20 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:23.851 * Looking for test storage... 00:05:23.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:23.851 04:22:20 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.851 04:22:20 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.851 04:22:20 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.851 04:22:20 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.851 04:22:20 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.851 04:22:20 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.851 04:22:20 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.851 04:22:20 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.851 04:22:20 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.851 04:22:20 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.851 04:22:20 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.851 04:22:20 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.851 04:22:20 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.851 04:22:20 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.851 04:22:20 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.851 04:22:20 env -- scripts/common.sh@344 -- # case "$op" in 00:05:23.851 04:22:20 env -- scripts/common.sh@345 -- # : 1 00:05:23.851 04:22:20 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.851 04:22:20 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.851 04:22:20 env -- scripts/common.sh@365 -- # decimal 1 00:05:23.851 04:22:20 env -- scripts/common.sh@353 -- # local d=1 00:05:23.851 04:22:20 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.851 04:22:20 env -- scripts/common.sh@355 -- # echo 1 00:05:23.851 04:22:20 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.851 04:22:20 env -- scripts/common.sh@366 -- # decimal 2 00:05:23.851 04:22:20 env -- scripts/common.sh@353 -- # local d=2 00:05:23.851 04:22:20 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.851 04:22:20 env -- scripts/common.sh@355 -- # echo 2 00:05:23.851 04:22:20 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.851 04:22:20 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.851 04:22:20 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.851 04:22:20 env -- scripts/common.sh@368 -- # return 0 00:05:23.851 04:22:20 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.851 04:22:20 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.851 --rc genhtml_branch_coverage=1 00:05:23.851 --rc genhtml_function_coverage=1 00:05:23.851 --rc genhtml_legend=1 00:05:23.851 --rc geninfo_all_blocks=1 00:05:23.851 --rc geninfo_unexecuted_blocks=1 00:05:23.851 00:05:23.851 ' 00:05:23.851 04:22:20 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.851 --rc genhtml_branch_coverage=1 00:05:23.851 --rc genhtml_function_coverage=1 00:05:23.851 --rc genhtml_legend=1 00:05:23.851 --rc geninfo_all_blocks=1 00:05:23.851 --rc geninfo_unexecuted_blocks=1 00:05:23.851 00:05:23.851 ' 00:05:23.851 04:22:20 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.851 --rc genhtml_branch_coverage=1 00:05:23.851 --rc genhtml_function_coverage=1 00:05:23.851 --rc genhtml_legend=1 00:05:23.851 --rc geninfo_all_blocks=1 00:05:23.851 --rc geninfo_unexecuted_blocks=1 00:05:23.851 00:05:23.851 ' 00:05:23.851 04:22:20 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.851 --rc genhtml_branch_coverage=1 00:05:23.851 --rc genhtml_function_coverage=1 00:05:23.851 --rc genhtml_legend=1 00:05:23.851 --rc geninfo_all_blocks=1 00:05:23.851 --rc geninfo_unexecuted_blocks=1 00:05:23.851 00:05:23.851 ' 00:05:23.851 04:22:20 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:23.851 04:22:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.851 04:22:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.851 04:22:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.851 ************************************ 00:05:23.851 START TEST env_memory 00:05:23.851 ************************************ 00:05:23.851 04:22:20 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:23.851 00:05:23.851 00:05:23.851 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.851 http://cunit.sourceforge.net/ 00:05:23.851 00:05:23.851 00:05:23.851 Suite: memory 00:05:23.851 Test: alloc and free memory map ...[2024-11-27 04:22:20.341663] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:23.852 passed 00:05:23.852 Test: mem map translation ...[2024-11-27 04:22:20.380224] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:23.852 [2024-11-27 04:22:20.380271] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:23.852 [2024-11-27 04:22:20.380329] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:23.852 [2024-11-27 04:22:20.380343] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:24.110 passed 00:05:24.110 Test: mem map registration ...[2024-11-27 04:22:20.448258] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:24.110 [2024-11-27 04:22:20.448310] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:24.110 passed 00:05:24.110 Test: mem map adjacent registrations ...passed 00:05:24.110 00:05:24.110 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.110 suites 1 1 n/a 0 0 00:05:24.110 tests 4 4 4 0 0 00:05:24.110 asserts 152 152 152 0 n/a 00:05:24.110 00:05:24.110 Elapsed time = 0.233 seconds 00:05:24.110 00:05:24.110 real 0m0.267s 00:05:24.110 user 0m0.239s 00:05:24.110 sys 0m0.022s 00:05:24.110 ************************************ 00:05:24.110 END TEST env_memory 00:05:24.110 ************************************ 00:05:24.110 04:22:20 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.110 04:22:20 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:24.110 04:22:20 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:24.110 04:22:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.110 04:22:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.110 04:22:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.110 ************************************ 00:05:24.110 START TEST env_vtophys 00:05:24.110 ************************************ 00:05:24.110 04:22:20 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:24.110 EAL: lib.eal log level changed from notice to debug 00:05:24.110 EAL: Detected lcore 0 as core 0 on socket 0 00:05:24.110 EAL: Detected lcore 1 as core 0 on socket 0 00:05:24.110 EAL: Detected lcore 2 as core 0 on socket 0 00:05:24.110 EAL: Detected lcore 3 as core 0 on socket 0 00:05:24.110 EAL: Detected lcore 4 as core 0 on socket 0 00:05:24.110 EAL: Detected lcore 5 as core 0 on socket 0 00:05:24.110 EAL: Detected lcore 6 as core 0 on socket 0 00:05:24.110 EAL: Detected lcore 7 as core 0 on socket 0 00:05:24.110 EAL: Detected lcore 8 as core 0 on socket 0 00:05:24.111 EAL: Detected lcore 9 as core 0 on socket 0 00:05:24.111 EAL: Maximum logical cores by configuration: 128 00:05:24.111 EAL: Detected CPU lcores: 10 00:05:24.111 EAL: Detected NUMA nodes: 1 00:05:24.111 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:24.111 EAL: Detected shared linkage of DPDK 00:05:24.111 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:24.111 EAL: Selected IOVA mode 'PA' 00:05:24.111 EAL: Probing VFIO support... 00:05:24.111 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:24.111 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:24.111 EAL: Ask a virtual area of 0x2e000 bytes 00:05:24.111 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:24.111 EAL: Setting up physically contiguous memory... 00:05:24.111 EAL: Setting maximum number of open files to 524288 00:05:24.111 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:24.111 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:24.111 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.111 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:24.111 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.111 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.111 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:24.111 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:24.111 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.111 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:24.111 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.111 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.111 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:24.111 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:24.111 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.111 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:24.111 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.111 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.111 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:24.111 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:24.111 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.111 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:24.111 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.111 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.111 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:24.111 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:24.111 EAL: Hugepages will be freed exactly as allocated. 00:05:24.111 EAL: No shared files mode enabled, IPC is disabled 00:05:24.111 EAL: No shared files mode enabled, IPC is disabled 00:05:24.369 EAL: TSC frequency is ~2600000 KHz 00:05:24.369 EAL: Main lcore 0 is ready (tid=7fd81451ea40;cpuset=[0]) 00:05:24.369 EAL: Trying to obtain current memory policy. 00:05:24.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.369 EAL: Restoring previous memory policy: 0 00:05:24.369 EAL: request: mp_malloc_sync 00:05:24.369 EAL: No shared files mode enabled, IPC is disabled 00:05:24.369 EAL: Heap on socket 0 was expanded by 2MB 00:05:24.369 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:24.369 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:24.369 EAL: Mem event callback 'spdk:(nil)' registered 00:05:24.369 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:24.369 00:05:24.369 00:05:24.369 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.369 http://cunit.sourceforge.net/ 00:05:24.369 00:05:24.369 00:05:24.369 Suite: components_suite 00:05:24.627 Test: vtophys_malloc_test ...passed 00:05:24.627 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:24.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.627 EAL: Restoring previous memory policy: 4 00:05:24.627 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.627 EAL: request: mp_malloc_sync 00:05:24.627 EAL: No shared files mode enabled, IPC is disabled 00:05:24.627 EAL: Heap on socket 0 was expanded by 4MB 00:05:24.627 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.627 EAL: request: mp_malloc_sync 00:05:24.627 EAL: No shared files mode enabled, IPC is disabled 00:05:24.627 EAL: Heap on socket 0 was shrunk by 4MB 00:05:24.627 EAL: Trying to obtain current memory policy. 00:05:24.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.627 EAL: Restoring previous memory policy: 4 00:05:24.627 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.627 EAL: request: mp_malloc_sync 00:05:24.627 EAL: No shared files mode enabled, IPC is disabled 00:05:24.627 EAL: Heap on socket 0 was expanded by 6MB 00:05:24.627 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.627 EAL: request: mp_malloc_sync 00:05:24.627 EAL: No shared files mode enabled, IPC is disabled 00:05:24.627 EAL: Heap on socket 0 was shrunk by 6MB 00:05:24.627 EAL: Trying to obtain current memory policy. 00:05:24.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.627 EAL: Restoring previous memory policy: 4 00:05:24.627 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.627 EAL: request: mp_malloc_sync 00:05:24.627 EAL: No shared files mode enabled, IPC is disabled 00:05:24.627 EAL: Heap on socket 0 was expanded by 10MB 00:05:24.627 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.627 EAL: request: mp_malloc_sync 00:05:24.627 EAL: No shared files mode enabled, IPC is disabled 00:05:24.627 EAL: Heap on socket 0 was shrunk by 10MB 00:05:24.627 EAL: Trying to obtain current memory policy. 00:05:24.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.627 EAL: Restoring previous memory policy: 4 00:05:24.627 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.627 EAL: request: mp_malloc_sync 00:05:24.627 EAL: No shared files mode enabled, IPC is disabled 00:05:24.627 EAL: Heap on socket 0 was expanded by 18MB 00:05:24.627 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.627 EAL: request: mp_malloc_sync 00:05:24.627 EAL: No shared files mode enabled, IPC is disabled 00:05:24.627 EAL: Heap on socket 0 was shrunk by 18MB 00:05:24.627 EAL: Trying to obtain current memory policy. 00:05:24.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.627 EAL: Restoring previous memory policy: 4 00:05:24.627 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.627 EAL: request: mp_malloc_sync 00:05:24.627 EAL: No shared files mode enabled, IPC is disabled 00:05:24.627 EAL: Heap on socket 0 was expanded by 34MB 00:05:24.627 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.627 EAL: request: mp_malloc_sync 00:05:24.627 EAL: No shared files mode enabled, IPC is disabled 00:05:24.627 EAL: Heap on socket 0 was shrunk by 34MB 00:05:24.886 EAL: Trying to obtain current memory policy. 
00:05:24.886 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.886 EAL: Restoring previous memory policy: 4 00:05:24.886 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.886 EAL: request: mp_malloc_sync 00:05:24.886 EAL: No shared files mode enabled, IPC is disabled 00:05:24.886 EAL: Heap on socket 0 was expanded by 66MB 00:05:24.886 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.886 EAL: request: mp_malloc_sync 00:05:24.886 EAL: No shared files mode enabled, IPC is disabled 00:05:24.886 EAL: Heap on socket 0 was shrunk by 66MB 00:05:24.886 EAL: Trying to obtain current memory policy. 00:05:24.886 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.886 EAL: Restoring previous memory policy: 4 00:05:24.886 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.886 EAL: request: mp_malloc_sync 00:05:24.886 EAL: No shared files mode enabled, IPC is disabled 00:05:24.886 EAL: Heap on socket 0 was expanded by 130MB 00:05:25.144 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.144 EAL: request: mp_malloc_sync 00:05:25.144 EAL: No shared files mode enabled, IPC is disabled 00:05:25.144 EAL: Heap on socket 0 was shrunk by 130MB 00:05:25.144 EAL: Trying to obtain current memory policy. 00:05:25.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.144 EAL: Restoring previous memory policy: 4 00:05:25.144 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.144 EAL: request: mp_malloc_sync 00:05:25.144 EAL: No shared files mode enabled, IPC is disabled 00:05:25.144 EAL: Heap on socket 0 was expanded by 258MB 00:05:25.711 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.711 EAL: request: mp_malloc_sync 00:05:25.711 EAL: No shared files mode enabled, IPC is disabled 00:05:25.711 EAL: Heap on socket 0 was shrunk by 258MB 00:05:25.711 EAL: Trying to obtain current memory policy. 00:05:25.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.969 EAL: Restoring previous memory policy: 4 00:05:25.969 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.969 EAL: request: mp_malloc_sync 00:05:25.969 EAL: No shared files mode enabled, IPC is disabled 00:05:25.969 EAL: Heap on socket 0 was expanded by 514MB 00:05:26.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.536 EAL: request: mp_malloc_sync 00:05:26.536 EAL: No shared files mode enabled, IPC is disabled 00:05:26.536 EAL: Heap on socket 0 was shrunk by 514MB 00:05:27.101 EAL: Trying to obtain current memory policy. 
00:05:27.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.102 EAL: Restoring previous memory policy: 4 00:05:27.102 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.102 EAL: request: mp_malloc_sync 00:05:27.102 EAL: No shared files mode enabled, IPC is disabled 00:05:27.102 EAL: Heap on socket 0 was expanded by 1026MB 00:05:28.475 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.475 EAL: request: mp_malloc_sync 00:05:28.475 EAL: No shared files mode enabled, IPC is disabled 00:05:28.475 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:29.044 passed 00:05:29.044 00:05:29.044 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.044 suites 1 1 n/a 0 0 00:05:29.044 tests 2 2 2 0 0 00:05:29.044 asserts 5824 5824 5824 0 n/a 00:05:29.044 00:05:29.044 Elapsed time = 4.735 seconds 00:05:29.044 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.044 EAL: request: mp_malloc_sync 00:05:29.044 EAL: No shared files mode enabled, IPC is disabled 00:05:29.044 EAL: Heap on socket 0 was shrunk by 2MB 00:05:29.044 EAL: No shared files mode enabled, IPC is disabled 00:05:29.044 EAL: No shared files mode enabled, IPC is disabled 00:05:29.044 EAL: No shared files mode enabled, IPC is disabled 00:05:29.044 00:05:29.044 real 0m4.994s 00:05:29.044 user 0m4.233s 00:05:29.044 sys 0m0.616s 00:05:29.044 ************************************ 00:05:29.044 END TEST env_vtophys 00:05:29.044 ************************************ 00:05:29.044 04:22:25 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.044 04:22:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:29.303 04:22:25 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:29.303 04:22:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.303 04:22:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.303 04:22:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.303 ************************************ 00:05:29.303 START TEST env_pci 00:05:29.303 ************************************ 00:05:29.303 04:22:25 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:29.303 00:05:29.303 00:05:29.303 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.303 http://cunit.sourceforge.net/ 00:05:29.303 00:05:29.303 00:05:29.303 Suite: pci 00:05:29.303 Test: pci_hook ...[2024-11-27 04:22:25.700869] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57032 has claimed it 00:05:29.303 passed 00:05:29.303 00:05:29.303 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.303 suites 1 1 n/a 0 0 00:05:29.303 tests 1 1 1 0 0 00:05:29.303 asserts 25 25 25 0 n/a 00:05:29.303 00:05:29.303 Elapsed time = 0.006 seconds 00:05:29.303 EAL: Cannot find device (10000:00:01.0) 00:05:29.303 EAL: Failed to attach device on primary process 00:05:29.303 00:05:29.303 real 0m0.053s 00:05:29.303 user 0m0.019s 00:05:29.303 sys 0m0.034s 00:05:29.303 04:22:25 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.303 04:22:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:29.303 ************************************ 00:05:29.303 END TEST env_pci 00:05:29.303 ************************************ 00:05:29.303 04:22:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:29.303 04:22:25 env -- env/env.sh@15 -- # uname 00:05:29.303 04:22:25 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:29.303 04:22:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:29.303 04:22:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.303 04:22:25 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:29.303 04:22:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.303 04:22:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.303 ************************************ 00:05:29.303 START TEST env_dpdk_post_init 00:05:29.303 ************************************ 00:05:29.303 04:22:25 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.303 EAL: Detected CPU lcores: 10 00:05:29.303 EAL: Detected NUMA nodes: 1 00:05:29.303 EAL: Detected shared linkage of DPDK 00:05:29.303 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.303 EAL: Selected IOVA mode 'PA' 00:05:29.561 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.561 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:29.561 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:29.561 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:29.561 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:29.561 Starting DPDK initialization... 00:05:29.561 Starting SPDK post initialization... 00:05:29.561 SPDK NVMe probe 00:05:29.561 Attaching to 0000:00:10.0 00:05:29.561 Attaching to 0000:00:11.0 00:05:29.561 Attaching to 0000:00:12.0 00:05:29.561 Attaching to 0000:00:13.0 00:05:29.561 Attached to 0000:00:10.0 00:05:29.561 Attached to 0000:00:11.0 00:05:29.561 Attached to 0000:00:13.0 00:05:29.561 Attached to 0000:00:12.0 00:05:29.561 Cleaning up... 
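The env_dpdk_post_init run above probes the four emulated NVMe controllers (1b36:0010) and detaches cleanly. For anyone reproducing this stage outside the harness, a rough sketch of the equivalent manual invocation follows — assuming an SPDK build tree and the standard scripts/setup.sh hugepage helper; the HUGEMEM value is illustrative, not what this CI node used.

  # Reserve hugepages and bind NVMe devices to a userspace driver,
  # as the autotest harness does before the env suite runs.
  sudo HUGEMEM=2048 ./scripts/setup.sh

  # Run the post-init test pinned to core 0 with the same fixed base
  # virtual address the log shows, so DPDK mappings land in a
  # predictable range.
  ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 \
      --base-virtaddr=0x200000000000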
00:05:29.561 00:05:29.561 real 0m0.220s 00:05:29.561 user 0m0.069s 00:05:29.561 sys 0m0.052s 00:05:29.561 04:22:26 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.561 ************************************ 00:05:29.561 END TEST env_dpdk_post_init 00:05:29.561 ************************************ 00:05:29.561 04:22:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.561 04:22:26 env -- env/env.sh@26 -- # uname 00:05:29.561 04:22:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:29.561 04:22:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.561 04:22:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.561 04:22:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.561 04:22:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.561 ************************************ 00:05:29.561 START TEST env_mem_callbacks 00:05:29.561 ************************************ 00:05:29.561 04:22:26 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.561 EAL: Detected CPU lcores: 10 00:05:29.561 EAL: Detected NUMA nodes: 1 00:05:29.561 EAL: Detected shared linkage of DPDK 00:05:29.561 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.561 EAL: Selected IOVA mode 'PA' 00:05:29.819 00:05:29.819 00:05:29.819 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.819 http://cunit.sourceforge.net/ 00:05:29.819 00:05:29.819 00:05:29.819 Suite: memory 00:05:29.819 Test: test ... 00:05:29.819 register 0x200000200000 2097152 00:05:29.819 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.819 malloc 3145728 00:05:29.819 register 0x200000400000 4194304 00:05:29.819 buf 0x2000004fffc0 len 3145728 PASSED 00:05:29.819 malloc 64 00:05:29.819 buf 0x2000004ffec0 len 64 PASSED 00:05:29.819 malloc 4194304 00:05:29.819 register 0x200000800000 6291456 00:05:29.819 buf 0x2000009fffc0 len 4194304 PASSED 00:05:29.820 free 0x2000004fffc0 3145728 00:05:29.820 free 0x2000004ffec0 64 00:05:29.820 unregister 0x200000400000 4194304 PASSED 00:05:29.820 free 0x2000009fffc0 4194304 00:05:29.820 unregister 0x200000800000 6291456 PASSED 00:05:29.820 malloc 8388608 00:05:29.820 register 0x200000400000 10485760 00:05:29.820 buf 0x2000005fffc0 len 8388608 PASSED 00:05:29.820 free 0x2000005fffc0 8388608 00:05:29.820 unregister 0x200000400000 10485760 PASSED 00:05:29.820 passed 00:05:29.820 00:05:29.820 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.820 suites 1 1 n/a 0 0 00:05:29.820 tests 1 1 1 0 0 00:05:29.820 asserts 15 15 15 0 n/a 00:05:29.820 00:05:29.820 Elapsed time = 0.040 seconds 00:05:29.820 00:05:29.820 real 0m0.209s 00:05:29.820 user 0m0.061s 00:05:29.820 sys 0m0.045s 00:05:29.820 04:22:26 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.820 04:22:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:29.820 ************************************ 00:05:29.820 END TEST env_mem_callbacks 00:05:29.820 ************************************ 00:05:29.820 00:05:29.820 real 0m6.181s 00:05:29.820 user 0m4.760s 00:05:29.820 sys 0m0.988s 00:05:29.820 04:22:26 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.820 ************************************ 00:05:29.820 END TEST env 00:05:29.820 ************************************ 00:05:29.820 04:22:26 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:29.820 04:22:26 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:29.820 04:22:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.820 04:22:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.820 04:22:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.820 ************************************ 00:05:29.820 START TEST rpc 00:05:29.820 ************************************ 00:05:29.820 04:22:26 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:30.077 * Looking for test storage... 00:05:30.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.077 04:22:26 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.077 04:22:26 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.077 04:22:26 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.077 04:22:26 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.077 04:22:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.077 04:22:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.077 04:22:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.077 04:22:26 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.077 04:22:26 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.077 04:22:26 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.077 04:22:26 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.077 04:22:26 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.077 04:22:26 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.077 04:22:26 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.077 04:22:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.078 04:22:26 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:30.078 04:22:26 rpc -- scripts/common.sh@345 -- # : 1 00:05:30.078 04:22:26 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.078 04:22:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.078 04:22:26 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:30.078 04:22:26 rpc -- scripts/common.sh@353 -- # local d=1 00:05:30.078 04:22:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.078 04:22:26 rpc -- scripts/common.sh@355 -- # echo 1 00:05:30.078 04:22:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.078 04:22:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:30.078 04:22:26 rpc -- scripts/common.sh@353 -- # local d=2 00:05:30.078 04:22:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.078 04:22:26 rpc -- scripts/common.sh@355 -- # echo 2 00:05:30.078 04:22:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.078 04:22:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.078 04:22:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.078 04:22:26 rpc -- scripts/common.sh@368 -- # return 0 00:05:30.078 04:22:26 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.078 04:22:26 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.078 --rc genhtml_branch_coverage=1 00:05:30.078 --rc genhtml_function_coverage=1 00:05:30.078 --rc genhtml_legend=1 00:05:30.078 --rc geninfo_all_blocks=1 00:05:30.078 --rc geninfo_unexecuted_blocks=1 00:05:30.078 00:05:30.078 ' 00:05:30.078 04:22:26 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.078 --rc genhtml_branch_coverage=1 00:05:30.078 --rc genhtml_function_coverage=1 00:05:30.078 --rc genhtml_legend=1 00:05:30.078 --rc geninfo_all_blocks=1 00:05:30.078 --rc geninfo_unexecuted_blocks=1 00:05:30.078 00:05:30.078 ' 00:05:30.078 04:22:26 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.078 --rc genhtml_branch_coverage=1 00:05:30.078 --rc genhtml_function_coverage=1 00:05:30.078 --rc genhtml_legend=1 00:05:30.078 --rc geninfo_all_blocks=1 00:05:30.078 --rc geninfo_unexecuted_blocks=1 00:05:30.078 00:05:30.078 ' 00:05:30.078 04:22:26 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.078 --rc genhtml_branch_coverage=1 00:05:30.078 --rc genhtml_function_coverage=1 00:05:30.078 --rc genhtml_legend=1 00:05:30.078 --rc geninfo_all_blocks=1 00:05:30.078 --rc geninfo_unexecuted_blocks=1 00:05:30.078 00:05:30.078 ' 00:05:30.078 04:22:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57159 00:05:30.078 04:22:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.078 04:22:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57159 00:05:30.078 04:22:26 rpc -- common/autotest_common.sh@835 -- # '[' -z 57159 ']' 00:05:30.078 04:22:26 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.078 04:22:26 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.078 04:22:26 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:30.078 04:22:26 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.078 04:22:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.078 04:22:26 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:30.078 [2024-11-27 04:22:26.570541] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:30.078 [2024-11-27 04:22:26.570662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57159 ] 00:05:30.335 [2024-11-27 04:22:26.724336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.335 [2024-11-27 04:22:26.801681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:30.335 [2024-11-27 04:22:26.801737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57159' to capture a snapshot of events at runtime. 00:05:30.335 [2024-11-27 04:22:26.801745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:30.335 [2024-11-27 04:22:26.801753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:30.335 [2024-11-27 04:22:26.801759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57159 for offline analysis/debug. 00:05:30.335 [2024-11-27 04:22:26.802408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.901 04:22:27 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.901 04:22:27 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.901 04:22:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.901 04:22:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.901 04:22:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:30.902 04:22:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:30.902 04:22:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.902 04:22:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.902 04:22:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.902 ************************************ 00:05:30.902 START TEST rpc_integrity 00:05:30.902 ************************************ 00:05:30.902 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:30.902 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.902 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.902 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.902 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.902 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.902 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.902 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.902 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
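For context, rpc_cmd is a thin wrapper over SPDK's JSON-RPC client, so the calls this test issues can be replayed by hand against a running spdk_tgt. A sketch, assuming the default /var/tmp/spdk.sock socket:

  # Create an 8 MiB malloc bdev with 512-byte blocks (16384 blocks,
  # matching the bdev dump that follows below).
  ./scripts/rpc.py bdev_malloc_create 8 512

  # Layer a passthru bdev on top of it, mirroring the integrity test.
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0

  # Inspect the resulting bdevs as JSON.
  ./scripts/rpc.py bdev_get_bdevs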
00:05:30.902 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.902 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.902 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.902 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:30.902 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.902 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.902 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.902 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.902 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.902 { 00:05:30.902 "name": "Malloc0", 00:05:30.902 "aliases": [ 00:05:30.902 "8d5f4a51-3549-42d8-a61d-bb9e5af447ea" 00:05:30.902 ], 00:05:30.902 "product_name": "Malloc disk", 00:05:30.902 "block_size": 512, 00:05:30.902 "num_blocks": 16384, 00:05:30.902 "uuid": "8d5f4a51-3549-42d8-a61d-bb9e5af447ea", 00:05:30.902 "assigned_rate_limits": { 00:05:30.902 "rw_ios_per_sec": 0, 00:05:30.902 "rw_mbytes_per_sec": 0, 00:05:30.902 "r_mbytes_per_sec": 0, 00:05:30.902 "w_mbytes_per_sec": 0 00:05:30.902 }, 00:05:30.902 "claimed": false, 00:05:30.902 "zoned": false, 00:05:30.902 "supported_io_types": { 00:05:30.902 "read": true, 00:05:30.902 "write": true, 00:05:30.902 "unmap": true, 00:05:30.902 "flush": true, 00:05:30.902 "reset": true, 00:05:30.902 "nvme_admin": false, 00:05:30.902 "nvme_io": false, 00:05:30.902 "nvme_io_md": false, 00:05:30.902 "write_zeroes": true, 00:05:30.902 "zcopy": true, 00:05:30.902 "get_zone_info": false, 00:05:30.902 "zone_management": false, 00:05:30.902 "zone_append": false, 00:05:30.902 "compare": false, 00:05:30.902 "compare_and_write": false, 00:05:30.902 "abort": true, 00:05:30.902 "seek_hole": false, 00:05:30.902 "seek_data": false, 00:05:30.902 "copy": true, 00:05:30.902 "nvme_iov_md": false 00:05:30.902 }, 00:05:30.902 "memory_domains": [ 00:05:30.902 { 00:05:30.902 "dma_device_id": "system", 00:05:30.902 "dma_device_type": 1 00:05:30.902 }, 00:05:30.902 { 00:05:30.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.902 "dma_device_type": 2 00:05:30.902 } 00:05:30.902 ], 00:05:30.902 "driver_specific": {} 00:05:30.902 } 00:05:30.902 ]' 00:05:30.902 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:31.161 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.161 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.161 [2024-11-27 04:22:27.512502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:31.161 [2024-11-27 04:22:27.512549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.161 [2024-11-27 04:22:27.512568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:31.161 [2024-11-27 04:22:27.512578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.161 [2024-11-27 04:22:27.514262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.161 [2024-11-27 04:22:27.514298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.161 
Passthru0 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.161 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.161 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.161 { 00:05:31.161 "name": "Malloc0", 00:05:31.161 "aliases": [ 00:05:31.161 "8d5f4a51-3549-42d8-a61d-bb9e5af447ea" 00:05:31.161 ], 00:05:31.161 "product_name": "Malloc disk", 00:05:31.161 "block_size": 512, 00:05:31.161 "num_blocks": 16384, 00:05:31.161 "uuid": "8d5f4a51-3549-42d8-a61d-bb9e5af447ea", 00:05:31.161 "assigned_rate_limits": { 00:05:31.161 "rw_ios_per_sec": 0, 00:05:31.161 "rw_mbytes_per_sec": 0, 00:05:31.161 "r_mbytes_per_sec": 0, 00:05:31.161 "w_mbytes_per_sec": 0 00:05:31.161 }, 00:05:31.161 "claimed": true, 00:05:31.161 "claim_type": "exclusive_write", 00:05:31.161 "zoned": false, 00:05:31.161 "supported_io_types": { 00:05:31.161 "read": true, 00:05:31.161 "write": true, 00:05:31.161 "unmap": true, 00:05:31.161 "flush": true, 00:05:31.161 "reset": true, 00:05:31.161 "nvme_admin": false, 00:05:31.161 "nvme_io": false, 00:05:31.161 "nvme_io_md": false, 00:05:31.161 "write_zeroes": true, 00:05:31.161 "zcopy": true, 00:05:31.161 "get_zone_info": false, 00:05:31.161 "zone_management": false, 00:05:31.161 "zone_append": false, 00:05:31.161 "compare": false, 00:05:31.161 "compare_and_write": false, 00:05:31.161 "abort": true, 00:05:31.161 "seek_hole": false, 00:05:31.161 "seek_data": false, 00:05:31.161 "copy": true, 00:05:31.161 "nvme_iov_md": false 00:05:31.161 }, 00:05:31.161 "memory_domains": [ 00:05:31.161 { 00:05:31.161 "dma_device_id": "system", 00:05:31.161 "dma_device_type": 1 00:05:31.161 }, 00:05:31.161 { 00:05:31.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.161 "dma_device_type": 2 00:05:31.161 } 00:05:31.161 ], 00:05:31.161 "driver_specific": {} 00:05:31.161 }, 00:05:31.161 { 00:05:31.161 "name": "Passthru0", 00:05:31.161 "aliases": [ 00:05:31.161 "608591cd-557a-504f-a197-773440b906a0" 00:05:31.161 ], 00:05:31.161 "product_name": "passthru", 00:05:31.161 "block_size": 512, 00:05:31.161 "num_blocks": 16384, 00:05:31.161 "uuid": "608591cd-557a-504f-a197-773440b906a0", 00:05:31.161 "assigned_rate_limits": { 00:05:31.161 "rw_ios_per_sec": 0, 00:05:31.161 "rw_mbytes_per_sec": 0, 00:05:31.161 "r_mbytes_per_sec": 0, 00:05:31.161 "w_mbytes_per_sec": 0 00:05:31.161 }, 00:05:31.161 "claimed": false, 00:05:31.161 "zoned": false, 00:05:31.161 "supported_io_types": { 00:05:31.161 "read": true, 00:05:31.161 "write": true, 00:05:31.161 "unmap": true, 00:05:31.161 "flush": true, 00:05:31.161 "reset": true, 00:05:31.161 "nvme_admin": false, 00:05:31.161 "nvme_io": false, 00:05:31.161 "nvme_io_md": false, 00:05:31.161 "write_zeroes": true, 00:05:31.161 "zcopy": true, 00:05:31.161 "get_zone_info": false, 00:05:31.161 "zone_management": false, 00:05:31.161 "zone_append": false, 00:05:31.161 "compare": false, 00:05:31.161 "compare_and_write": false, 00:05:31.161 "abort": true, 00:05:31.161 "seek_hole": false, 00:05:31.161 "seek_data": false, 00:05:31.161 "copy": true, 00:05:31.161 "nvme_iov_md": false 00:05:31.161 }, 00:05:31.161 "memory_domains": [ 00:05:31.161 { 00:05:31.161 "dma_device_id": "system", 00:05:31.161 "dma_device_type": 1 00:05:31.161 }, 
00:05:31.161 { 00:05:31.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.161 "dma_device_type": 2 00:05:31.161 } 00:05:31.161 ], 00:05:31.161 "driver_specific": { 00:05:31.161 "passthru": { 00:05:31.161 "name": "Passthru0", 00:05:31.161 "base_bdev_name": "Malloc0" 00:05:31.161 } 00:05:31.161 } 00:05:31.161 } 00:05:31.161 ]' 00:05:31.161 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:31.161 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.161 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.161 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.161 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.161 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:31.161 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:31.161 04:22:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.161 00:05:31.161 real 0m0.236s 00:05:31.161 user 0m0.130s 00:05:31.161 sys 0m0.031s 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.161 04:22:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.161 ************************************ 00:05:31.161 END TEST rpc_integrity 00:05:31.161 ************************************ 00:05:31.161 04:22:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:31.161 04:22:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.162 04:22:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.162 04:22:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.162 ************************************ 00:05:31.162 START TEST rpc_plugins 00:05:31.162 ************************************ 00:05:31.162 04:22:27 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:31.162 04:22:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:31.162 04:22:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.162 04:22:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.162 04:22:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.162 04:22:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:31.162 04:22:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:31.162 04:22:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.162 04:22:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.162 04:22:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.162 04:22:27 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:31.162 { 00:05:31.162 "name": "Malloc1", 00:05:31.162 "aliases": [ 00:05:31.162 "e975c9f6-5223-438f-8564-50ea378f2c1b" 00:05:31.162 ], 00:05:31.162 "product_name": "Malloc disk", 00:05:31.162 "block_size": 4096, 00:05:31.162 "num_blocks": 256, 00:05:31.162 "uuid": "e975c9f6-5223-438f-8564-50ea378f2c1b", 00:05:31.162 "assigned_rate_limits": { 00:05:31.162 "rw_ios_per_sec": 0, 00:05:31.162 "rw_mbytes_per_sec": 0, 00:05:31.162 "r_mbytes_per_sec": 0, 00:05:31.162 "w_mbytes_per_sec": 0 00:05:31.162 }, 00:05:31.162 "claimed": false, 00:05:31.162 "zoned": false, 00:05:31.162 "supported_io_types": { 00:05:31.162 "read": true, 00:05:31.162 "write": true, 00:05:31.162 "unmap": true, 00:05:31.162 "flush": true, 00:05:31.162 "reset": true, 00:05:31.162 "nvme_admin": false, 00:05:31.162 "nvme_io": false, 00:05:31.162 "nvme_io_md": false, 00:05:31.162 "write_zeroes": true, 00:05:31.162 "zcopy": true, 00:05:31.162 "get_zone_info": false, 00:05:31.162 "zone_management": false, 00:05:31.162 "zone_append": false, 00:05:31.162 "compare": false, 00:05:31.162 "compare_and_write": false, 00:05:31.162 "abort": true, 00:05:31.162 "seek_hole": false, 00:05:31.162 "seek_data": false, 00:05:31.162 "copy": true, 00:05:31.162 "nvme_iov_md": false 00:05:31.162 }, 00:05:31.162 "memory_domains": [ 00:05:31.162 { 00:05:31.162 "dma_device_id": "system", 00:05:31.162 "dma_device_type": 1 00:05:31.162 }, 00:05:31.162 { 00:05:31.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.162 "dma_device_type": 2 00:05:31.162 } 00:05:31.162 ], 00:05:31.162 "driver_specific": {} 00:05:31.162 } 00:05:31.162 ]' 00:05:31.162 04:22:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:31.162 04:22:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:31.162 04:22:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:31.162 04:22:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.162 04:22:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.162 04:22:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.162 04:22:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:31.162 04:22:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.162 04:22:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.419 04:22:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.419 04:22:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:31.420 04:22:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:31.420 ************************************ 00:05:31.420 END TEST rpc_plugins 00:05:31.420 ************************************ 00:05:31.420 04:22:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:31.420 00:05:31.420 real 0m0.111s 00:05:31.420 user 0m0.064s 00:05:31.420 sys 0m0.014s 00:05:31.420 04:22:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.420 04:22:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.420 04:22:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:31.420 04:22:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.420 04:22:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.420 04:22:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.420 ************************************ 00:05:31.420 START TEST rpc_trace_cmd_test 
00:05:31.420 ************************************ 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:31.420 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57159", 00:05:31.420 "tpoint_group_mask": "0x8", 00:05:31.420 "iscsi_conn": { 00:05:31.420 "mask": "0x2", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 }, 00:05:31.420 "scsi": { 00:05:31.420 "mask": "0x4", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 }, 00:05:31.420 "bdev": { 00:05:31.420 "mask": "0x8", 00:05:31.420 "tpoint_mask": "0xffffffffffffffff" 00:05:31.420 }, 00:05:31.420 "nvmf_rdma": { 00:05:31.420 "mask": "0x10", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 }, 00:05:31.420 "nvmf_tcp": { 00:05:31.420 "mask": "0x20", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 }, 00:05:31.420 "ftl": { 00:05:31.420 "mask": "0x40", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 }, 00:05:31.420 "blobfs": { 00:05:31.420 "mask": "0x80", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 }, 00:05:31.420 "dsa": { 00:05:31.420 "mask": "0x200", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 }, 00:05:31.420 "thread": { 00:05:31.420 "mask": "0x400", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 }, 00:05:31.420 "nvme_pcie": { 00:05:31.420 "mask": "0x800", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 }, 00:05:31.420 "iaa": { 00:05:31.420 "mask": "0x1000", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 }, 00:05:31.420 "nvme_tcp": { 00:05:31.420 "mask": "0x2000", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 }, 00:05:31.420 "bdev_nvme": { 00:05:31.420 "mask": "0x4000", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 }, 00:05:31.420 "sock": { 00:05:31.420 "mask": "0x8000", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 }, 00:05:31.420 "blob": { 00:05:31.420 "mask": "0x10000", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 }, 00:05:31.420 "bdev_raid": { 00:05:31.420 "mask": "0x20000", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 }, 00:05:31.420 "scheduler": { 00:05:31.420 "mask": "0x40000", 00:05:31.420 "tpoint_mask": "0x0" 00:05:31.420 } 00:05:31.420 }' 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:31.420 00:05:31.420 real 0m0.165s 00:05:31.420 
user 0m0.128s 00:05:31.420 sys 0m0.026s 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.420 04:22:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:31.420 ************************************ 00:05:31.420 END TEST rpc_trace_cmd_test 00:05:31.420 ************************************ 00:05:31.677 04:22:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:31.677 04:22:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:31.677 04:22:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:31.677 04:22:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.677 04:22:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.677 04:22:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.677 ************************************ 00:05:31.677 START TEST rpc_daemon_integrity 00:05:31.677 ************************************ 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.677 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:31.677 { 00:05:31.677 "name": "Malloc2", 00:05:31.677 "aliases": [ 00:05:31.677 "53fda138-0325-4c71-8053-f2a858fb1516" 00:05:31.677 ], 00:05:31.677 "product_name": "Malloc disk", 00:05:31.677 "block_size": 512, 00:05:31.677 "num_blocks": 16384, 00:05:31.677 "uuid": "53fda138-0325-4c71-8053-f2a858fb1516", 00:05:31.677 "assigned_rate_limits": { 00:05:31.677 "rw_ios_per_sec": 0, 00:05:31.677 "rw_mbytes_per_sec": 0, 00:05:31.677 "r_mbytes_per_sec": 0, 00:05:31.677 "w_mbytes_per_sec": 0 00:05:31.677 }, 00:05:31.677 "claimed": false, 00:05:31.678 "zoned": false, 00:05:31.678 "supported_io_types": { 00:05:31.678 "read": true, 00:05:31.678 "write": true, 00:05:31.678 "unmap": true, 00:05:31.678 "flush": true, 00:05:31.678 "reset": true, 00:05:31.678 "nvme_admin": false, 00:05:31.678 "nvme_io": false, 00:05:31.678 "nvme_io_md": false, 00:05:31.678 "write_zeroes": true, 00:05:31.678 "zcopy": true, 00:05:31.678 "get_zone_info": 
false, 00:05:31.678 "zone_management": false, 00:05:31.678 "zone_append": false, 00:05:31.678 "compare": false, 00:05:31.678 "compare_and_write": false, 00:05:31.678 "abort": true, 00:05:31.678 "seek_hole": false, 00:05:31.678 "seek_data": false, 00:05:31.678 "copy": true, 00:05:31.678 "nvme_iov_md": false 00:05:31.678 }, 00:05:31.678 "memory_domains": [ 00:05:31.678 { 00:05:31.678 "dma_device_id": "system", 00:05:31.678 "dma_device_type": 1 00:05:31.678 }, 00:05:31.678 { 00:05:31.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.678 "dma_device_type": 2 00:05:31.678 } 00:05:31.678 ], 00:05:31.678 "driver_specific": {} 00:05:31.678 } 00:05:31.678 ]' 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.678 [2024-11-27 04:22:28.124178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:31.678 [2024-11-27 04:22:28.124218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.678 [2024-11-27 04:22:28.124233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:31.678 [2024-11-27 04:22:28.124241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.678 [2024-11-27 04:22:28.125895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.678 [2024-11-27 04:22:28.125923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.678 Passthru0 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.678 { 00:05:31.678 "name": "Malloc2", 00:05:31.678 "aliases": [ 00:05:31.678 "53fda138-0325-4c71-8053-f2a858fb1516" 00:05:31.678 ], 00:05:31.678 "product_name": "Malloc disk", 00:05:31.678 "block_size": 512, 00:05:31.678 "num_blocks": 16384, 00:05:31.678 "uuid": "53fda138-0325-4c71-8053-f2a858fb1516", 00:05:31.678 "assigned_rate_limits": { 00:05:31.678 "rw_ios_per_sec": 0, 00:05:31.678 "rw_mbytes_per_sec": 0, 00:05:31.678 "r_mbytes_per_sec": 0, 00:05:31.678 "w_mbytes_per_sec": 0 00:05:31.678 }, 00:05:31.678 "claimed": true, 00:05:31.678 "claim_type": "exclusive_write", 00:05:31.678 "zoned": false, 00:05:31.678 "supported_io_types": { 00:05:31.678 "read": true, 00:05:31.678 "write": true, 00:05:31.678 "unmap": true, 00:05:31.678 "flush": true, 00:05:31.678 "reset": true, 00:05:31.678 "nvme_admin": false, 00:05:31.678 "nvme_io": false, 00:05:31.678 "nvme_io_md": false, 00:05:31.678 "write_zeroes": true, 00:05:31.678 "zcopy": true, 00:05:31.678 "get_zone_info": false, 00:05:31.678 "zone_management": false, 00:05:31.678 "zone_append": false, 00:05:31.678 "compare": false, 
00:05:31.678 "compare_and_write": false, 00:05:31.678 "abort": true, 00:05:31.678 "seek_hole": false, 00:05:31.678 "seek_data": false, 00:05:31.678 "copy": true, 00:05:31.678 "nvme_iov_md": false 00:05:31.678 }, 00:05:31.678 "memory_domains": [ 00:05:31.678 { 00:05:31.678 "dma_device_id": "system", 00:05:31.678 "dma_device_type": 1 00:05:31.678 }, 00:05:31.678 { 00:05:31.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.678 "dma_device_type": 2 00:05:31.678 } 00:05:31.678 ], 00:05:31.678 "driver_specific": {} 00:05:31.678 }, 00:05:31.678 { 00:05:31.678 "name": "Passthru0", 00:05:31.678 "aliases": [ 00:05:31.678 "a1724e1e-3886-57f1-add7-db8abec2a4f3" 00:05:31.678 ], 00:05:31.678 "product_name": "passthru", 00:05:31.678 "block_size": 512, 00:05:31.678 "num_blocks": 16384, 00:05:31.678 "uuid": "a1724e1e-3886-57f1-add7-db8abec2a4f3", 00:05:31.678 "assigned_rate_limits": { 00:05:31.678 "rw_ios_per_sec": 0, 00:05:31.678 "rw_mbytes_per_sec": 0, 00:05:31.678 "r_mbytes_per_sec": 0, 00:05:31.678 "w_mbytes_per_sec": 0 00:05:31.678 }, 00:05:31.678 "claimed": false, 00:05:31.678 "zoned": false, 00:05:31.678 "supported_io_types": { 00:05:31.678 "read": true, 00:05:31.678 "write": true, 00:05:31.678 "unmap": true, 00:05:31.678 "flush": true, 00:05:31.678 "reset": true, 00:05:31.678 "nvme_admin": false, 00:05:31.678 "nvme_io": false, 00:05:31.678 "nvme_io_md": false, 00:05:31.678 "write_zeroes": true, 00:05:31.678 "zcopy": true, 00:05:31.678 "get_zone_info": false, 00:05:31.678 "zone_management": false, 00:05:31.678 "zone_append": false, 00:05:31.678 "compare": false, 00:05:31.678 "compare_and_write": false, 00:05:31.678 "abort": true, 00:05:31.678 "seek_hole": false, 00:05:31.678 "seek_data": false, 00:05:31.678 "copy": true, 00:05:31.678 "nvme_iov_md": false 00:05:31.678 }, 00:05:31.678 "memory_domains": [ 00:05:31.678 { 00:05:31.678 "dma_device_id": "system", 00:05:31.678 "dma_device_type": 1 00:05:31.678 }, 00:05:31.678 { 00:05:31.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.678 "dma_device_type": 2 00:05:31.678 } 00:05:31.678 ], 00:05:31.678 "driver_specific": { 00:05:31.678 "passthru": { 00:05:31.678 "name": "Passthru0", 00:05:31.678 "base_bdev_name": "Malloc2" 00:05:31.678 } 00:05:31.678 } 00:05:31.678 } 00:05:31.678 ]' 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.678 00:05:31.678 real 0m0.222s 00:05:31.678 user 0m0.118s 00:05:31.678 sys 0m0.036s 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.678 04:22:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.678 ************************************ 00:05:31.678 END TEST rpc_daemon_integrity 00:05:31.678 ************************************ 00:05:31.936 04:22:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:31.936 04:22:28 rpc -- rpc/rpc.sh@84 -- # killprocess 57159 00:05:31.936 04:22:28 rpc -- common/autotest_common.sh@954 -- # '[' -z 57159 ']' 00:05:31.936 04:22:28 rpc -- common/autotest_common.sh@958 -- # kill -0 57159 00:05:31.936 04:22:28 rpc -- common/autotest_common.sh@959 -- # uname 00:05:31.936 04:22:28 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.936 04:22:28 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57159 00:05:31.936 04:22:28 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.936 04:22:28 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.936 04:22:28 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57159' 00:05:31.936 killing process with pid 57159 00:05:31.936 04:22:28 rpc -- common/autotest_common.sh@973 -- # kill 57159 00:05:31.936 04:22:28 rpc -- common/autotest_common.sh@978 -- # wait 57159 00:05:33.310 00:05:33.310 real 0m3.109s 00:05:33.310 user 0m3.532s 00:05:33.310 sys 0m0.560s 00:05:33.310 04:22:29 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.310 04:22:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.310 ************************************ 00:05:33.310 END TEST rpc 00:05:33.310 ************************************ 00:05:33.310 04:22:29 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:33.310 04:22:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.310 04:22:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.310 04:22:29 -- common/autotest_common.sh@10 -- # set +x 00:05:33.310 ************************************ 00:05:33.310 START TEST skip_rpc 00:05:33.310 ************************************ 00:05:33.310 04:22:29 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:33.310 * Looking for test storage... 
00:05:33.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:33.310 04:22:29 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:33.310 04:22:29 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:33.310 04:22:29 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:33.310 04:22:29 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.310 04:22:29 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:33.310 04:22:29 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.310 04:22:29 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:33.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.310 --rc genhtml_branch_coverage=1 00:05:33.310 --rc genhtml_function_coverage=1 00:05:33.310 --rc genhtml_legend=1 00:05:33.310 --rc geninfo_all_blocks=1 00:05:33.310 --rc geninfo_unexecuted_blocks=1 00:05:33.310 00:05:33.310 ' 00:05:33.310 04:22:29 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:33.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.310 --rc genhtml_branch_coverage=1 00:05:33.310 --rc genhtml_function_coverage=1 00:05:33.310 --rc genhtml_legend=1 00:05:33.310 --rc geninfo_all_blocks=1 00:05:33.310 --rc geninfo_unexecuted_blocks=1 00:05:33.310 00:05:33.310 ' 00:05:33.310 04:22:29 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:33.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.310 --rc genhtml_branch_coverage=1 00:05:33.310 --rc genhtml_function_coverage=1 00:05:33.310 --rc genhtml_legend=1 00:05:33.310 --rc geninfo_all_blocks=1 00:05:33.310 --rc geninfo_unexecuted_blocks=1 00:05:33.310 00:05:33.310 ' 00:05:33.310 04:22:29 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:33.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.310 --rc genhtml_branch_coverage=1 00:05:33.310 --rc genhtml_function_coverage=1 00:05:33.310 --rc genhtml_legend=1 00:05:33.310 --rc geninfo_all_blocks=1 00:05:33.310 --rc geninfo_unexecuted_blocks=1 00:05:33.310 00:05:33.310 ' 00:05:33.310 04:22:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:33.310 04:22:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:33.310 04:22:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:33.311 04:22:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.311 04:22:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.311 04:22:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.311 ************************************ 00:05:33.311 START TEST skip_rpc 00:05:33.311 ************************************ 00:05:33.311 04:22:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:33.311 04:22:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57366 00:05:33.311 04:22:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.311 04:22:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:33.311 04:22:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:33.311 [2024-11-27 04:22:29.762170] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
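What test_skip_rpc is exercising here: the target is launched with --no-rpc-server, so nothing ever listens on the default /var/tmp/spdk.sock, and the script waits a fixed five seconds before asserting that an RPC call fails. A minimal manual sketch of the same sequence, assuming a default SPDK checkout and using scripts/rpc.py in place of the harness's rpc_cmd wrapper:

  # Start the target with the RPC server disabled (flag taken from the trace above).
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                # fixed wait; there is no RPC socket to poll
  # With no RPC listener, any client call must fail; that failure is the pass condition.
  scripts/rpc.py spdk_get_version || echo 'RPC failed, as the test expects'
  kill "$tgt_pid"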
00:05:33.311 [2024-11-27 04:22:29.762297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57366 ] 00:05:33.590 [2024-11-27 04:22:29.921889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.590 [2024-11-27 04:22:30.022378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57366 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57366 ']' 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57366 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57366 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.852 killing process with pid 57366 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57366' 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57366 00:05:38.852 04:22:34 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57366 00:05:39.416 00:05:39.416 real 0m6.202s 00:05:39.416 user 0m5.838s 00:05:39.416 sys 0m0.262s 00:05:39.416 04:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.416 ************************************ 00:05:39.416 END TEST skip_rpc 00:05:39.416 ************************************ 00:05:39.416 04:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:39.417 04:22:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:39.417 04:22:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.417 04:22:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.417 04:22:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.417 ************************************ 00:05:39.417 START TEST skip_rpc_with_json 00:05:39.417 ************************************ 00:05:39.417 04:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:39.417 04:22:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:39.417 04:22:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57459 00:05:39.417 04:22:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.417 04:22:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57459 00:05:39.417 04:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57459 ']' 00:05:39.417 04:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.417 04:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.417 04:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.417 04:22:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.417 04:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.417 04:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.674 [2024-11-27 04:22:36.008108] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
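The trace below walks the save_config flow: ask for a TCP transport before one exists (an expected JSON-RPC error), create it, then dump the entire live configuration to config.json so a second target can later be booted from that file. Sketched against the default RPC socket, with scripts/rpc.py again standing in for rpc_cmd:

  scripts/rpc.py nvmf_get_transports --trtype tcp    # fails with "No such device": nothing created yet
  scripts/rpc.py nvmf_create_transport -t tcp        # target logs "*** TCP Transport Init ***"
  scripts/rpc.py save_config > test/rpc/config.json  # serialize every subsystem's current settings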
00:05:39.674 [2024-11-27 04:22:36.008228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57459 ] 00:05:39.674 [2024-11-27 04:22:36.153801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.674 [2024-11-27 04:22:36.229037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.245 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.245 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:40.245 04:22:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:40.245 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.245 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.245 [2024-11-27 04:22:36.799991] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:40.245 request: 00:05:40.245 { 00:05:40.245 "trtype": "tcp", 00:05:40.245 "method": "nvmf_get_transports", 00:05:40.245 "req_id": 1 00:05:40.245 } 00:05:40.245 Got JSON-RPC error response 00:05:40.245 response: 00:05:40.245 { 00:05:40.245 "code": -19, 00:05:40.245 "message": "No such device" 00:05:40.245 } 00:05:40.245 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:40.245 04:22:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:40.245 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.245 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.245 [2024-11-27 04:22:36.808077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.245 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.245 04:22:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:40.245 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.245 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.506 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.506 04:22:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:40.506 { 00:05:40.506 "subsystems": [ 00:05:40.506 { 00:05:40.506 "subsystem": "fsdev", 00:05:40.506 "config": [ 00:05:40.506 { 00:05:40.506 "method": "fsdev_set_opts", 00:05:40.506 "params": { 00:05:40.506 "fsdev_io_pool_size": 65535, 00:05:40.506 "fsdev_io_cache_size": 256 00:05:40.506 } 00:05:40.506 } 00:05:40.506 ] 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "subsystem": "keyring", 00:05:40.506 "config": [] 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "subsystem": "iobuf", 00:05:40.506 "config": [ 00:05:40.506 { 00:05:40.506 "method": "iobuf_set_options", 00:05:40.506 "params": { 00:05:40.506 "small_pool_count": 8192, 00:05:40.506 "large_pool_count": 1024, 00:05:40.506 "small_bufsize": 8192, 00:05:40.506 "large_bufsize": 135168, 00:05:40.506 "enable_numa": false 00:05:40.506 } 00:05:40.506 } 00:05:40.506 ] 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "subsystem": "sock", 00:05:40.506 "config": [ 00:05:40.506 { 
00:05:40.506 "method": "sock_set_default_impl", 00:05:40.506 "params": { 00:05:40.506 "impl_name": "posix" 00:05:40.506 } 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "method": "sock_impl_set_options", 00:05:40.506 "params": { 00:05:40.506 "impl_name": "ssl", 00:05:40.506 "recv_buf_size": 4096, 00:05:40.506 "send_buf_size": 4096, 00:05:40.506 "enable_recv_pipe": true, 00:05:40.506 "enable_quickack": false, 00:05:40.506 "enable_placement_id": 0, 00:05:40.506 "enable_zerocopy_send_server": true, 00:05:40.506 "enable_zerocopy_send_client": false, 00:05:40.506 "zerocopy_threshold": 0, 00:05:40.506 "tls_version": 0, 00:05:40.506 "enable_ktls": false 00:05:40.506 } 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "method": "sock_impl_set_options", 00:05:40.506 "params": { 00:05:40.506 "impl_name": "posix", 00:05:40.506 "recv_buf_size": 2097152, 00:05:40.506 "send_buf_size": 2097152, 00:05:40.506 "enable_recv_pipe": true, 00:05:40.506 "enable_quickack": false, 00:05:40.506 "enable_placement_id": 0, 00:05:40.506 "enable_zerocopy_send_server": true, 00:05:40.506 "enable_zerocopy_send_client": false, 00:05:40.506 "zerocopy_threshold": 0, 00:05:40.506 "tls_version": 0, 00:05:40.506 "enable_ktls": false 00:05:40.506 } 00:05:40.506 } 00:05:40.506 ] 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "subsystem": "vmd", 00:05:40.506 "config": [] 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "subsystem": "accel", 00:05:40.506 "config": [ 00:05:40.506 { 00:05:40.506 "method": "accel_set_options", 00:05:40.506 "params": { 00:05:40.506 "small_cache_size": 128, 00:05:40.506 "large_cache_size": 16, 00:05:40.506 "task_count": 2048, 00:05:40.506 "sequence_count": 2048, 00:05:40.506 "buf_count": 2048 00:05:40.506 } 00:05:40.506 } 00:05:40.506 ] 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "subsystem": "bdev", 00:05:40.506 "config": [ 00:05:40.506 { 00:05:40.506 "method": "bdev_set_options", 00:05:40.506 "params": { 00:05:40.506 "bdev_io_pool_size": 65535, 00:05:40.506 "bdev_io_cache_size": 256, 00:05:40.506 "bdev_auto_examine": true, 00:05:40.506 "iobuf_small_cache_size": 128, 00:05:40.506 "iobuf_large_cache_size": 16 00:05:40.506 } 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "method": "bdev_raid_set_options", 00:05:40.506 "params": { 00:05:40.506 "process_window_size_kb": 1024, 00:05:40.506 "process_max_bandwidth_mb_sec": 0 00:05:40.506 } 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "method": "bdev_iscsi_set_options", 00:05:40.506 "params": { 00:05:40.506 "timeout_sec": 30 00:05:40.506 } 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "method": "bdev_nvme_set_options", 00:05:40.506 "params": { 00:05:40.506 "action_on_timeout": "none", 00:05:40.506 "timeout_us": 0, 00:05:40.506 "timeout_admin_us": 0, 00:05:40.506 "keep_alive_timeout_ms": 10000, 00:05:40.506 "arbitration_burst": 0, 00:05:40.506 "low_priority_weight": 0, 00:05:40.506 "medium_priority_weight": 0, 00:05:40.506 "high_priority_weight": 0, 00:05:40.506 "nvme_adminq_poll_period_us": 10000, 00:05:40.506 "nvme_ioq_poll_period_us": 0, 00:05:40.506 "io_queue_requests": 0, 00:05:40.506 "delay_cmd_submit": true, 00:05:40.506 "transport_retry_count": 4, 00:05:40.506 "bdev_retry_count": 3, 00:05:40.506 "transport_ack_timeout": 0, 00:05:40.506 "ctrlr_loss_timeout_sec": 0, 00:05:40.506 "reconnect_delay_sec": 0, 00:05:40.506 "fast_io_fail_timeout_sec": 0, 00:05:40.506 "disable_auto_failback": false, 00:05:40.506 "generate_uuids": false, 00:05:40.506 "transport_tos": 0, 00:05:40.506 "nvme_error_stat": false, 00:05:40.506 "rdma_srq_size": 0, 00:05:40.506 "io_path_stat": false, 
00:05:40.506 "allow_accel_sequence": false, 00:05:40.506 "rdma_max_cq_size": 0, 00:05:40.506 "rdma_cm_event_timeout_ms": 0, 00:05:40.506 "dhchap_digests": [ 00:05:40.506 "sha256", 00:05:40.506 "sha384", 00:05:40.506 "sha512" 00:05:40.506 ], 00:05:40.506 "dhchap_dhgroups": [ 00:05:40.506 "null", 00:05:40.506 "ffdhe2048", 00:05:40.506 "ffdhe3072", 00:05:40.506 "ffdhe4096", 00:05:40.506 "ffdhe6144", 00:05:40.506 "ffdhe8192" 00:05:40.506 ] 00:05:40.506 } 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "method": "bdev_nvme_set_hotplug", 00:05:40.506 "params": { 00:05:40.506 "period_us": 100000, 00:05:40.506 "enable": false 00:05:40.506 } 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "method": "bdev_wait_for_examine" 00:05:40.506 } 00:05:40.506 ] 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "subsystem": "scsi", 00:05:40.506 "config": null 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "subsystem": "scheduler", 00:05:40.506 "config": [ 00:05:40.506 { 00:05:40.506 "method": "framework_set_scheduler", 00:05:40.506 "params": { 00:05:40.506 "name": "static" 00:05:40.506 } 00:05:40.506 } 00:05:40.506 ] 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "subsystem": "vhost_scsi", 00:05:40.506 "config": [] 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "subsystem": "vhost_blk", 00:05:40.506 "config": [] 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "subsystem": "ublk", 00:05:40.506 "config": [] 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "subsystem": "nbd", 00:05:40.506 "config": [] 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "subsystem": "nvmf", 00:05:40.506 "config": [ 00:05:40.506 { 00:05:40.506 "method": "nvmf_set_config", 00:05:40.506 "params": { 00:05:40.506 "discovery_filter": "match_any", 00:05:40.506 "admin_cmd_passthru": { 00:05:40.506 "identify_ctrlr": false 00:05:40.506 }, 00:05:40.506 "dhchap_digests": [ 00:05:40.506 "sha256", 00:05:40.506 "sha384", 00:05:40.506 "sha512" 00:05:40.506 ], 00:05:40.506 "dhchap_dhgroups": [ 00:05:40.506 "null", 00:05:40.506 "ffdhe2048", 00:05:40.506 "ffdhe3072", 00:05:40.506 "ffdhe4096", 00:05:40.506 "ffdhe6144", 00:05:40.506 "ffdhe8192" 00:05:40.506 ] 00:05:40.506 } 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "method": "nvmf_set_max_subsystems", 00:05:40.506 "params": { 00:05:40.506 "max_subsystems": 1024 00:05:40.506 } 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "method": "nvmf_set_crdt", 00:05:40.506 "params": { 00:05:40.506 "crdt1": 0, 00:05:40.506 "crdt2": 0, 00:05:40.506 "crdt3": 0 00:05:40.506 } 00:05:40.506 }, 00:05:40.506 { 00:05:40.506 "method": "nvmf_create_transport", 00:05:40.506 "params": { 00:05:40.507 "trtype": "TCP", 00:05:40.507 "max_queue_depth": 128, 00:05:40.507 "max_io_qpairs_per_ctrlr": 127, 00:05:40.507 "in_capsule_data_size": 4096, 00:05:40.507 "max_io_size": 131072, 00:05:40.507 "io_unit_size": 131072, 00:05:40.507 "max_aq_depth": 128, 00:05:40.507 "num_shared_buffers": 511, 00:05:40.507 "buf_cache_size": 4294967295, 00:05:40.507 "dif_insert_or_strip": false, 00:05:40.507 "zcopy": false, 00:05:40.507 "c2h_success": true, 00:05:40.507 "sock_priority": 0, 00:05:40.507 "abort_timeout_sec": 1, 00:05:40.507 "ack_timeout": 0, 00:05:40.507 "data_wr_pool_size": 0 00:05:40.507 } 00:05:40.507 } 00:05:40.507 ] 00:05:40.507 }, 00:05:40.507 { 00:05:40.507 "subsystem": "iscsi", 00:05:40.507 "config": [ 00:05:40.507 { 00:05:40.507 "method": "iscsi_set_options", 00:05:40.507 "params": { 00:05:40.507 "node_base": "iqn.2016-06.io.spdk", 00:05:40.507 "max_sessions": 128, 00:05:40.507 "max_connections_per_session": 2, 00:05:40.507 "max_queue_depth": 64, 00:05:40.507 
"default_time2wait": 2, 00:05:40.507 "default_time2retain": 20, 00:05:40.507 "first_burst_length": 8192, 00:05:40.507 "immediate_data": true, 00:05:40.507 "allow_duplicated_isid": false, 00:05:40.507 "error_recovery_level": 0, 00:05:40.507 "nop_timeout": 60, 00:05:40.507 "nop_in_interval": 30, 00:05:40.507 "disable_chap": false, 00:05:40.507 "require_chap": false, 00:05:40.507 "mutual_chap": false, 00:05:40.507 "chap_group": 0, 00:05:40.507 "max_large_datain_per_connection": 64, 00:05:40.507 "max_r2t_per_connection": 4, 00:05:40.507 "pdu_pool_size": 36864, 00:05:40.507 "immediate_data_pool_size": 16384, 00:05:40.507 "data_out_pool_size": 2048 00:05:40.507 } 00:05:40.507 } 00:05:40.507 ] 00:05:40.507 } 00:05:40.507 ] 00:05:40.507 } 00:05:40.507 04:22:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:40.507 04:22:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57459 00:05:40.507 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57459 ']' 00:05:40.507 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57459 00:05:40.507 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:40.507 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.507 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57459 00:05:40.507 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.507 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.507 killing process with pid 57459 00:05:40.507 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57459' 00:05:40.507 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57459 00:05:40.507 04:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57459 00:05:41.884 04:22:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:41.884 04:22:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57499 00:05:41.884 04:22:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:47.166 04:22:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57499 00:05:47.166 04:22:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57499 ']' 00:05:47.166 04:22:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57499 00:05:47.166 04:22:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:47.166 04:22:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.166 04:22:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57499 00:05:47.166 04:22:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.166 killing process with pid 57499 00:05:47.166 04:22:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.166 04:22:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57499' 00:05:47.166 04:22:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57499 00:05:47.166 04:22:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57499 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:48.114 00:05:48.114 real 0m8.426s 00:05:48.114 user 0m8.069s 00:05:48.114 sys 0m0.530s 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.114 ************************************ 00:05:48.114 END TEST skip_rpc_with_json 00:05:48.114 ************************************ 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.114 04:22:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:48.114 04:22:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.114 04:22:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.114 04:22:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.114 ************************************ 00:05:48.114 START TEST skip_rpc_with_delay 00:05:48.114 ************************************ 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.114 [2024-11-27 04:22:44.499645] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
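The *ERROR* line above is the pass condition for test_skip_rpc_with_delay: --wait-for-rpc defers subsystem initialization until a framework_start_init RPC arrives, so combining it with --no-rpc-server, which removes the RPC listener entirely, is a contradiction the app rejects at startup. A one-line reproduction, assuming a default build tree:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; echo "exit: $?"   # non-zero by design

In the normal --wait-for-rpc flow the listener stays up and initialization resumes only once a client issues scripts/rpc.py framework_start_init.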
00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.114 00:05:48.114 real 0m0.128s 00:05:48.114 user 0m0.064s 00:05:48.114 sys 0m0.063s 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.114 04:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:48.114 ************************************ 00:05:48.114 END TEST skip_rpc_with_delay 00:05:48.114 ************************************ 00:05:48.114 04:22:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:48.114 04:22:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:48.114 04:22:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:48.114 04:22:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.114 04:22:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.114 04:22:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.114 ************************************ 00:05:48.114 START TEST exit_on_failed_rpc_init 00:05:48.114 ************************************ 00:05:48.114 04:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:48.114 04:22:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57620 00:05:48.114 04:22:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57620 00:05:48.114 04:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57620 ']' 00:05:48.114 04:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.114 04:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.114 04:22:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.114 04:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.114 04:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.114 04:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.114 [2024-11-27 04:22:44.681551] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
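Note that the startup handshake here differs from test_skip_rpc: waitforlisten blocks until the new target actually answers on /var/tmp/spdk.sock instead of sleeping a fixed interval. A rough stand-in for that helper (the real implementation in autotest_common.sh does more bookkeeping, such as giving up after max_retries):

  until scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
    sleep 0.1                            # poll until the RPC server responds
  done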
00:05:48.114 [2024-11-27 04:22:44.681696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57620 ] 00:05:48.389 [2024-11-27 04:22:44.839207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.389 [2024-11-27 04:22:44.959234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.383 04:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.383 04:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:49.383 04:22:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.383 04:22:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.383 04:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:49.383 04:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.383 04:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.383 04:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.383 04:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.383 04:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.383 04:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.383 04:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.383 04:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.383 04:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:49.383 04:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.383 [2024-11-27 04:22:45.678309] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:49.383 [2024-11-27 04:22:45.678429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57634 ] 00:05:49.383 [2024-11-27 04:22:45.839285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.383 [2024-11-27 04:22:45.936618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.383 [2024-11-27 04:22:45.936704] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:49.383 [2024-11-27 04:22:45.936717] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:49.383 [2024-11-27 04:22:45.936740] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:49.642 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:49.642 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:49.642 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:49.642 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:49.642 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:49.642 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:49.642 04:22:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:49.642 04:22:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57620 00:05:49.642 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57620 ']' 00:05:49.642 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57620 00:05:49.642 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:49.642 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.642 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57620 00:05:49.642 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.643 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.643 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57620' 00:05:49.643 killing process with pid 57620 00:05:49.643 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57620 00:05:49.643 04:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57620 00:05:51.023 00:05:51.023 real 0m2.899s 00:05:51.023 user 0m3.177s 00:05:51.023 sys 0m0.447s 00:05:51.023 04:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.023 ************************************ 00:05:51.023 END TEST exit_on_failed_rpc_init 00:05:51.023 ************************************ 00:05:51.023 04:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:51.023 04:22:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:51.023 00:05:51.023 real 0m18.022s 00:05:51.023 user 0m17.287s 00:05:51.023 sys 0m1.480s 00:05:51.023 04:22:47 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.023 ************************************ 00:05:51.023 END TEST skip_rpc 00:05:51.023 ************************************ 00:05:51.023 04:22:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.023 04:22:47 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:51.023 04:22:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.023 04:22:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.023 04:22:47 -- common/autotest_common.sh@10 -- # set +x 00:05:51.023 
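The failure cascade that just completed is exactly what test_exit_on_failed_rpc_init is after: the first spdk_tgt (pid 57620) owns /var/tmp/spdk.sock, the second instance therefore cannot bind its RPC socket, spdk_app_stop reports a non-zero status, and the harness folds the raw exit code down (es=234, then 234-128=106, then the generic 1) before killing the first target. Outside the test, a second instance would simply be pointed at its own socket, e.g. (path is illustrative):

  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock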
************************************ 00:05:51.023 START TEST rpc_client 00:05:51.023 ************************************ 00:05:51.023 04:22:47 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:51.282 * Looking for test storage... 00:05:51.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:51.282 04:22:47 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.282 04:22:47 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.282 04:22:47 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.282 04:22:47 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.282 04:22:47 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.282 04:22:47 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.282 04:22:47 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.282 04:22:47 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.282 04:22:47 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.283 04:22:47 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:51.283 04:22:47 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.283 04:22:47 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.283 --rc genhtml_branch_coverage=1 00:05:51.283 --rc genhtml_function_coverage=1 00:05:51.283 --rc genhtml_legend=1 00:05:51.283 --rc geninfo_all_blocks=1 00:05:51.283 --rc geninfo_unexecuted_blocks=1 00:05:51.283 00:05:51.283 ' 00:05:51.283 04:22:47 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.283 --rc genhtml_branch_coverage=1 00:05:51.283 --rc genhtml_function_coverage=1 00:05:51.283 --rc genhtml_legend=1 00:05:51.283 --rc geninfo_all_blocks=1 00:05:51.283 --rc geninfo_unexecuted_blocks=1 00:05:51.283 00:05:51.283 ' 00:05:51.283 04:22:47 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.283 --rc genhtml_branch_coverage=1 00:05:51.283 --rc genhtml_function_coverage=1 00:05:51.283 --rc genhtml_legend=1 00:05:51.283 --rc geninfo_all_blocks=1 00:05:51.283 --rc geninfo_unexecuted_blocks=1 00:05:51.283 00:05:51.283 ' 00:05:51.283 04:22:47 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.283 --rc genhtml_branch_coverage=1 00:05:51.283 --rc genhtml_function_coverage=1 00:05:51.283 --rc genhtml_legend=1 00:05:51.283 --rc geninfo_all_blocks=1 00:05:51.283 --rc geninfo_unexecuted_blocks=1 00:05:51.283 00:05:51.283 ' 00:05:51.283 04:22:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:51.283 OK 00:05:51.283 04:22:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:51.283 00:05:51.283 real 0m0.178s 00:05:51.283 user 0m0.102s 00:05:51.283 sys 0m0.080s 00:05:51.283 ************************************ 00:05:51.283 END TEST rpc_client 00:05:51.283 04:22:47 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.283 04:22:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:51.283 ************************************ 00:05:51.283 04:22:47 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:51.283 04:22:47 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.283 04:22:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.283 04:22:47 -- common/autotest_common.sh@10 -- # set +x 00:05:51.283 ************************************ 00:05:51.283 START TEST json_config 00:05:51.283 ************************************ 00:05:51.283 04:22:47 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:51.543 04:22:47 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.543 04:22:47 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.543 04:22:47 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.543 04:22:47 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.543 04:22:47 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.543 04:22:47 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.543 04:22:47 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.543 04:22:47 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.543 04:22:47 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.543 04:22:47 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.543 04:22:47 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.543 04:22:47 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.543 04:22:47 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.543 04:22:47 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.543 04:22:47 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.543 04:22:47 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:51.543 04:22:47 json_config -- scripts/common.sh@345 -- # : 1 00:05:51.543 04:22:47 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.543 04:22:47 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.543 04:22:47 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:51.543 04:22:47 json_config -- scripts/common.sh@353 -- # local d=1 00:05:51.543 04:22:47 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.543 04:22:47 json_config -- scripts/common.sh@355 -- # echo 1 00:05:51.543 04:22:47 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.543 04:22:47 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:51.543 04:22:47 json_config -- scripts/common.sh@353 -- # local d=2 00:05:51.543 04:22:47 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.543 04:22:47 json_config -- scripts/common.sh@355 -- # echo 2 00:05:51.543 04:22:47 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.543 04:22:47 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.543 04:22:47 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.543 04:22:47 json_config -- scripts/common.sh@368 -- # return 0 00:05:51.543 04:22:47 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.543 04:22:47 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.543 --rc genhtml_branch_coverage=1 00:05:51.543 --rc genhtml_function_coverage=1 00:05:51.543 --rc genhtml_legend=1 00:05:51.543 --rc geninfo_all_blocks=1 00:05:51.543 --rc geninfo_unexecuted_blocks=1 00:05:51.543 00:05:51.543 ' 00:05:51.543 04:22:47 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.543 --rc genhtml_branch_coverage=1 00:05:51.543 --rc genhtml_function_coverage=1 00:05:51.543 --rc genhtml_legend=1 00:05:51.543 --rc geninfo_all_blocks=1 00:05:51.543 --rc geninfo_unexecuted_blocks=1 00:05:51.543 00:05:51.543 ' 00:05:51.543 04:22:47 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.543 --rc genhtml_branch_coverage=1 00:05:51.543 --rc genhtml_function_coverage=1 00:05:51.543 --rc genhtml_legend=1 00:05:51.543 --rc geninfo_all_blocks=1 00:05:51.543 --rc geninfo_unexecuted_blocks=1 00:05:51.543 00:05:51.543 ' 00:05:51.543 04:22:47 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.543 --rc genhtml_branch_coverage=1 00:05:51.543 --rc genhtml_function_coverage=1 00:05:51.543 --rc genhtml_legend=1 00:05:51.543 --rc geninfo_all_blocks=1 00:05:51.543 --rc geninfo_unexecuted_blocks=1 00:05:51.543 00:05:51.543 ' 00:05:51.543 04:22:47 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.543 04:22:47 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13189ee1-9cae-47b4-9e20-44b2397fdd91 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=13189ee1-9cae-47b4-9e20-44b2397fdd91 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:51.543 04:22:47 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.543 04:22:47 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.543 04:22:47 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.543 04:22:47 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.543 04:22:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.543 04:22:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.543 04:22:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.543 04:22:47 json_config -- paths/export.sh@5 -- # export PATH 00:05:51.543 04:22:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@51 -- # : 0 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.543 04:22:47 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.543 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.543 04:22:47 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.543 04:22:47 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:51.543 04:22:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:51.543 04:22:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:51.543 WARNING: No tests are enabled so not running JSON configuration tests 00:05:51.544 04:22:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:51.544 04:22:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:51.544 04:22:47 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:51.544 04:22:47 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:51.544 00:05:51.544 real 0m0.146s 00:05:51.544 user 0m0.086s 00:05:51.544 sys 0m0.061s 00:05:51.544 04:22:47 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.544 04:22:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.544 ************************************ 00:05:51.544 END TEST json_config 00:05:51.544 ************************************ 00:05:51.544 04:22:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:51.544 04:22:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.544 04:22:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.544 04:22:48 -- common/autotest_common.sh@10 -- # set +x 00:05:51.544 ************************************ 00:05:51.544 START TEST json_config_extra_key 00:05:51.544 ************************************ 00:05:51.544 04:22:48 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:51.544 04:22:48 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.544 04:22:48 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.544 04:22:48 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.804 04:22:48 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.804 04:22:48 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.804 04:22:48 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:51.804 04:22:48 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.804 04:22:48 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.804 --rc genhtml_branch_coverage=1 00:05:51.804 --rc genhtml_function_coverage=1 00:05:51.804 --rc genhtml_legend=1 00:05:51.804 --rc geninfo_all_blocks=1 00:05:51.804 --rc geninfo_unexecuted_blocks=1 00:05:51.804 00:05:51.804 ' 00:05:51.804 04:22:48 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.804 --rc genhtml_branch_coverage=1 00:05:51.804 --rc genhtml_function_coverage=1 00:05:51.804 --rc genhtml_legend=1 00:05:51.804 --rc geninfo_all_blocks=1 00:05:51.804 --rc geninfo_unexecuted_blocks=1 00:05:51.804 00:05:51.804 ' 00:05:51.804 04:22:48 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.804 --rc genhtml_branch_coverage=1 00:05:51.804 --rc genhtml_function_coverage=1 00:05:51.804 --rc genhtml_legend=1 00:05:51.804 --rc geninfo_all_blocks=1 00:05:51.804 --rc geninfo_unexecuted_blocks=1 00:05:51.804 00:05:51.804 ' 00:05:51.804 04:22:48 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.804 --rc genhtml_branch_coverage=1 00:05:51.804 --rc 
genhtml_function_coverage=1 00:05:51.804 --rc genhtml_legend=1 00:05:51.804 --rc geninfo_all_blocks=1 00:05:51.804 --rc geninfo_unexecuted_blocks=1 00:05:51.804 00:05:51.804 ' 00:05:51.804 04:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:51.804 04:22:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:51.804 04:22:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.804 04:22:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.804 04:22:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.804 04:22:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.804 04:22:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.804 04:22:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.804 04:22:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.804 04:22:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.804 04:22:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.804 04:22:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.804 04:22:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13189ee1-9cae-47b4-9e20-44b2397fdd91 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=13189ee1-9cae-47b4-9e20-44b2397fdd91 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:51.805 04:22:48 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.805 04:22:48 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.805 04:22:48 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.805 04:22:48 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.805 04:22:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.805 04:22:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.805 04:22:48 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.805 04:22:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:51.805 04:22:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.805 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.805 04:22:48 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.805 04:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:51.805 04:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:51.805 04:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:51.805 04:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:51.805 04:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:51.805 04:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:51.805 04:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:51.805 04:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:51.805 04:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:51.805 04:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:51.805 INFO: launching applications... 00:05:51.805 04:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
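[editor's note] The "[: : integer expression expected" diagnostic recorded above (twice now, both times from nvmf/common.sh line 33) is test(1) receiving an empty expansion where it expects an integer. A minimal sketch of the failure and a defensive rewrite; SOME_FLAG is an illustrative stand-in, not the actual variable name in nvmf/common.sh:

    # Reproduces the logged warning: an empty left operand of -eq is not
    # an integer, so test(1) prints the diagnostic and returns status 2.
    SOME_FLAG=""
    if [ "$SOME_FLAG" -eq 1 ]; then    # -> "[: : integer expression expected"
        echo "flag enabled"
    fi

    # Defensive form: default the empty expansion to 0 so the comparison
    # is always numeric and the branch is simply not taken.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi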
00:05:51.805 04:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:51.805 04:22:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:51.805 04:22:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:51.805 04:22:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:51.805 04:22:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:51.805 04:22:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:51.805 04:22:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.805 04:22:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.805 04:22:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57827 00:05:51.805 Waiting for target to run... 00:05:51.805 04:22:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:51.805 04:22:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57827 /var/tmp/spdk_tgt.sock 00:05:51.805 04:22:48 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57827 ']' 00:05:51.805 04:22:48 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.805 04:22:48 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.805 04:22:48 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.805 04:22:48 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.805 04:22:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:51.805 04:22:48 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:51.805 [2024-11-27 04:22:48.259198] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:51.805 [2024-11-27 04:22:48.259352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57827 ] 00:05:52.375 [2024-11-27 04:22:48.683351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.375 [2024-11-27 04:22:48.797803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.946 04:22:49 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.946 00:05:52.946 04:22:49 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:52.946 04:22:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:52.946 INFO: shutting down applications... 00:05:52.946 04:22:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
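[editor's note] The shutdown traced below (json_config/common.sh) sends SIGINT to the target, then probes the pid in half-second steps for up to 30 iterations before giving up. A reconstruction of that loop from the trace, not the verbatim common.sh source:

    app_pid=57827                      # pid recorded when the target was launched
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2>/dev/null; then   # signal 0 only probes the pid
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done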
00:05:52.946 04:22:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:52.946 04:22:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:52.946 04:22:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:52.946 04:22:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57827 ]] 00:05:52.946 04:22:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57827 00:05:52.946 04:22:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:52.946 04:22:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.946 04:22:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57827 00:05:52.946 04:22:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.512 04:22:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.512 04:22:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.512 04:22:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57827 00:05:53.512 04:22:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.769 04:22:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.769 04:22:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.769 04:22:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57827 00:05:53.769 04:22:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:54.338 04:22:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:54.338 04:22:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.338 04:22:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57827 00:05:54.338 04:22:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:54.908 04:22:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:54.908 04:22:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.908 04:22:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57827 00:05:54.908 04:22:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:54.908 04:22:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:54.908 04:22:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:54.908 SPDK target shutdown done 00:05:54.908 04:22:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:54.908 Success 00:05:54.908 04:22:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:54.908 00:05:54.908 real 0m3.321s 00:05:54.908 user 0m2.857s 00:05:54.908 sys 0m0.542s 00:05:54.908 04:22:51 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.908 ************************************ 00:05:54.908 END TEST json_config_extra_key 00:05:54.908 ************************************ 00:05:54.908 04:22:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:54.908 04:22:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.909 04:22:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.909 04:22:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.909 04:22:51 -- common/autotest_common.sh@10 -- # set +x 00:05:54.909 
************************************ 00:05:54.909 START TEST alias_rpc 00:05:54.909 ************************************ 00:05:54.909 04:22:51 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.909 * Looking for test storage... 00:05:54.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:54.909 04:22:51 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.909 04:22:51 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.909 04:22:51 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.168 04:22:51 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.168 04:22:51 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:55.168 04:22:51 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.168 04:22:51 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.168 --rc genhtml_branch_coverage=1 00:05:55.168 --rc genhtml_function_coverage=1 00:05:55.168 --rc genhtml_legend=1 00:05:55.168 --rc geninfo_all_blocks=1 00:05:55.168 --rc geninfo_unexecuted_blocks=1 00:05:55.168 00:05:55.168 ' 00:05:55.168 04:22:51 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.168 --rc genhtml_branch_coverage=1 00:05:55.168 --rc genhtml_function_coverage=1 00:05:55.168 --rc genhtml_legend=1 00:05:55.168 --rc geninfo_all_blocks=1 00:05:55.168 --rc geninfo_unexecuted_blocks=1 00:05:55.168 00:05:55.168 ' 00:05:55.168 04:22:51 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.168 --rc genhtml_branch_coverage=1 00:05:55.168 --rc genhtml_function_coverage=1 00:05:55.168 --rc genhtml_legend=1 00:05:55.168 --rc geninfo_all_blocks=1 00:05:55.168 --rc geninfo_unexecuted_blocks=1 00:05:55.168 00:05:55.168 ' 00:05:55.168 04:22:51 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.168 --rc genhtml_branch_coverage=1 00:05:55.168 --rc genhtml_function_coverage=1 00:05:55.168 --rc genhtml_legend=1 00:05:55.168 --rc geninfo_all_blocks=1 00:05:55.168 --rc geninfo_unexecuted_blocks=1 00:05:55.168 00:05:55.168 ' 00:05:55.169 04:22:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:55.169 04:22:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57926 00:05:55.169 04:22:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57926 00:05:55.169 04:22:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.169 04:22:51 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57926 ']' 00:05:55.169 04:22:51 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.169 04:22:51 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
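[editor's note] The lt/cmp_versions trace above (scripts/common.sh) recurs before every test in this section; it splits both version strings on ".-:" and compares them numerically field by field to decide whether the installed lcov predates 2.x. A compact sketch of that logic; the real scripts/common.sh spreads the work across lt, cmp_versions and a decimal helper:

    # Return 0 (true) if version $1 is strictly less than version $2;
    # missing fields count as 0, equal versions are not less-than.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }
    lt 1.15 2 && echo "lcov is older than 2.x"   # prints: lcov is older than 2.x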
00:05:55.169 04:22:51 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.169 04:22:51 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.169 04:22:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.169 [2024-11-27 04:22:51.621182] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:55.169 [2024-11-27 04:22:51.621309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57926 ] 00:05:55.429 [2024-11-27 04:22:51.778118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.429 [2024-11-27 04:22:51.900129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.372 04:22:52 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.372 04:22:52 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:56.372 04:22:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:56.372 04:22:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57926 00:05:56.372 04:22:52 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57926 ']' 00:05:56.372 04:22:52 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57926 00:05:56.372 04:22:52 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:56.372 04:22:52 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.372 04:22:52 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57926 00:05:56.372 04:22:52 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.372 killing process with pid 57926 00:05:56.372 04:22:52 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.372 04:22:52 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57926' 00:05:56.372 04:22:52 alias_rpc -- common/autotest_common.sh@973 -- # kill 57926 00:05:56.372 04:22:52 alias_rpc -- common/autotest_common.sh@978 -- # wait 57926 00:05:58.281 00:05:58.281 real 0m3.060s 00:05:58.281 user 0m3.111s 00:05:58.281 sys 0m0.481s 00:05:58.281 04:22:54 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.281 04:22:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.281 ************************************ 00:05:58.281 END TEST alias_rpc 00:05:58.281 ************************************ 00:05:58.281 04:22:54 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:58.281 04:22:54 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:58.281 04:22:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.281 04:22:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.281 04:22:54 -- common/autotest_common.sh@10 -- # set +x 00:05:58.281 ************************************ 00:05:58.281 START TEST spdkcli_tcp 00:05:58.281 ************************************ 00:05:58.281 04:22:54 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:58.281 * Looking for test storage... 
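[editor's note] The pid cleanup traced above (killprocess in common/autotest_common.sh) probes the process with kill -0, confirms via ps that the command is not a sudo wrapper, then kills and reaps it. A sketch reconstructed from the trace, not copied from the source:

    pid=57926
    if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
        process_name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" in this run
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"   # works because spdk_tgt was launched by this shell
        fi
    fi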
00:05:58.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:58.281 04:22:54 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.281 04:22:54 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.281 04:22:54 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.281 04:22:54 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.281 04:22:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:58.282 04:22:54 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.282 04:22:54 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:58.282 04:22:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:58.282 04:22:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.282 04:22:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:58.282 04:22:54 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.282 04:22:54 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.282 04:22:54 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.282 04:22:54 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:58.282 04:22:54 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.282 04:22:54 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.282 --rc genhtml_branch_coverage=1 00:05:58.282 --rc genhtml_function_coverage=1 00:05:58.282 --rc genhtml_legend=1 00:05:58.282 --rc geninfo_all_blocks=1 00:05:58.282 --rc geninfo_unexecuted_blocks=1 00:05:58.282 00:05:58.282 ' 00:05:58.282 04:22:54 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.282 --rc genhtml_branch_coverage=1 00:05:58.282 --rc genhtml_function_coverage=1 00:05:58.282 --rc genhtml_legend=1 00:05:58.282 --rc geninfo_all_blocks=1 00:05:58.282 --rc geninfo_unexecuted_blocks=1 00:05:58.282 
00:05:58.282 ' 00:05:58.282 04:22:54 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.282 --rc genhtml_branch_coverage=1 00:05:58.282 --rc genhtml_function_coverage=1 00:05:58.282 --rc genhtml_legend=1 00:05:58.282 --rc geninfo_all_blocks=1 00:05:58.282 --rc geninfo_unexecuted_blocks=1 00:05:58.282 00:05:58.282 ' 00:05:58.282 04:22:54 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.282 --rc genhtml_branch_coverage=1 00:05:58.282 --rc genhtml_function_coverage=1 00:05:58.282 --rc genhtml_legend=1 00:05:58.282 --rc geninfo_all_blocks=1 00:05:58.282 --rc geninfo_unexecuted_blocks=1 00:05:58.282 00:05:58.282 ' 00:05:58.282 04:22:54 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:58.282 04:22:54 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:58.282 04:22:54 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:58.282 04:22:54 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:58.282 04:22:54 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:58.282 04:22:54 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:58.282 04:22:54 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:58.282 04:22:54 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:58.282 04:22:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.282 04:22:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58022 00:05:58.282 04:22:54 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58022 00:05:58.282 04:22:54 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58022 ']' 00:05:58.282 04:22:54 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.282 04:22:54 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.282 04:22:54 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.282 04:22:54 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.282 04:22:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.282 04:22:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:58.282 [2024-11-27 04:22:54.720194] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:05:58.282 [2024-11-27 04:22:54.720309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58022 ] 00:05:58.542 [2024-11-27 04:22:54.875912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.542 [2024-11-27 04:22:54.953605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.542 [2024-11-27 04:22:54.953626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.114 04:22:55 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.114 04:22:55 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:59.114 04:22:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58039 00:05:59.114 04:22:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:59.114 04:22:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:59.114 [ 00:05:59.114 "bdev_malloc_delete", 00:05:59.114 "bdev_malloc_create", 00:05:59.114 "bdev_null_resize", 00:05:59.114 "bdev_null_delete", 00:05:59.114 "bdev_null_create", 00:05:59.114 "bdev_nvme_cuse_unregister", 00:05:59.114 "bdev_nvme_cuse_register", 00:05:59.114 "bdev_opal_new_user", 00:05:59.114 "bdev_opal_set_lock_state", 00:05:59.114 "bdev_opal_delete", 00:05:59.114 "bdev_opal_get_info", 00:05:59.114 "bdev_opal_create", 00:05:59.114 "bdev_nvme_opal_revert", 00:05:59.114 "bdev_nvme_opal_init", 00:05:59.114 "bdev_nvme_send_cmd", 00:05:59.114 "bdev_nvme_set_keys", 00:05:59.114 "bdev_nvme_get_path_iostat", 00:05:59.114 "bdev_nvme_get_mdns_discovery_info", 00:05:59.114 "bdev_nvme_stop_mdns_discovery", 00:05:59.114 "bdev_nvme_start_mdns_discovery", 00:05:59.114 "bdev_nvme_set_multipath_policy", 00:05:59.114 "bdev_nvme_set_preferred_path", 00:05:59.114 "bdev_nvme_get_io_paths", 00:05:59.114 "bdev_nvme_remove_error_injection", 00:05:59.114 "bdev_nvme_add_error_injection", 00:05:59.114 "bdev_nvme_get_discovery_info", 00:05:59.114 "bdev_nvme_stop_discovery", 00:05:59.114 "bdev_nvme_start_discovery", 00:05:59.114 "bdev_nvme_get_controller_health_info", 00:05:59.114 "bdev_nvme_disable_controller", 00:05:59.114 "bdev_nvme_enable_controller", 00:05:59.114 "bdev_nvme_reset_controller", 00:05:59.114 "bdev_nvme_get_transport_statistics", 00:05:59.114 "bdev_nvme_apply_firmware", 00:05:59.114 "bdev_nvme_detach_controller", 00:05:59.114 "bdev_nvme_get_controllers", 00:05:59.114 "bdev_nvme_attach_controller", 00:05:59.114 "bdev_nvme_set_hotplug", 00:05:59.114 "bdev_nvme_set_options", 00:05:59.114 "bdev_passthru_delete", 00:05:59.114 "bdev_passthru_create", 00:05:59.114 "bdev_lvol_set_parent_bdev", 00:05:59.114 "bdev_lvol_set_parent", 00:05:59.114 "bdev_lvol_check_shallow_copy", 00:05:59.114 "bdev_lvol_start_shallow_copy", 00:05:59.114 "bdev_lvol_grow_lvstore", 00:05:59.114 "bdev_lvol_get_lvols", 00:05:59.114 "bdev_lvol_get_lvstores", 00:05:59.114 "bdev_lvol_delete", 00:05:59.114 "bdev_lvol_set_read_only", 00:05:59.114 "bdev_lvol_resize", 00:05:59.114 "bdev_lvol_decouple_parent", 00:05:59.114 "bdev_lvol_inflate", 00:05:59.114 "bdev_lvol_rename", 00:05:59.114 "bdev_lvol_clone_bdev", 00:05:59.114 "bdev_lvol_clone", 00:05:59.114 "bdev_lvol_snapshot", 00:05:59.114 "bdev_lvol_create", 00:05:59.114 "bdev_lvol_delete_lvstore", 00:05:59.114 "bdev_lvol_rename_lvstore", 00:05:59.114 
"bdev_lvol_create_lvstore", 00:05:59.114 "bdev_raid_set_options", 00:05:59.114 "bdev_raid_remove_base_bdev", 00:05:59.114 "bdev_raid_add_base_bdev", 00:05:59.114 "bdev_raid_delete", 00:05:59.114 "bdev_raid_create", 00:05:59.114 "bdev_raid_get_bdevs", 00:05:59.114 "bdev_error_inject_error", 00:05:59.114 "bdev_error_delete", 00:05:59.114 "bdev_error_create", 00:05:59.114 "bdev_split_delete", 00:05:59.114 "bdev_split_create", 00:05:59.114 "bdev_delay_delete", 00:05:59.114 "bdev_delay_create", 00:05:59.114 "bdev_delay_update_latency", 00:05:59.114 "bdev_zone_block_delete", 00:05:59.114 "bdev_zone_block_create", 00:05:59.114 "blobfs_create", 00:05:59.114 "blobfs_detect", 00:05:59.114 "blobfs_set_cache_size", 00:05:59.114 "bdev_xnvme_delete", 00:05:59.114 "bdev_xnvme_create", 00:05:59.114 "bdev_aio_delete", 00:05:59.114 "bdev_aio_rescan", 00:05:59.114 "bdev_aio_create", 00:05:59.114 "bdev_ftl_set_property", 00:05:59.114 "bdev_ftl_get_properties", 00:05:59.114 "bdev_ftl_get_stats", 00:05:59.114 "bdev_ftl_unmap", 00:05:59.114 "bdev_ftl_unload", 00:05:59.114 "bdev_ftl_delete", 00:05:59.114 "bdev_ftl_load", 00:05:59.114 "bdev_ftl_create", 00:05:59.114 "bdev_virtio_attach_controller", 00:05:59.114 "bdev_virtio_scsi_get_devices", 00:05:59.114 "bdev_virtio_detach_controller", 00:05:59.114 "bdev_virtio_blk_set_hotplug", 00:05:59.114 "bdev_iscsi_delete", 00:05:59.114 "bdev_iscsi_create", 00:05:59.114 "bdev_iscsi_set_options", 00:05:59.114 "accel_error_inject_error", 00:05:59.114 "ioat_scan_accel_module", 00:05:59.114 "dsa_scan_accel_module", 00:05:59.114 "iaa_scan_accel_module", 00:05:59.114 "keyring_file_remove_key", 00:05:59.114 "keyring_file_add_key", 00:05:59.114 "keyring_linux_set_options", 00:05:59.114 "fsdev_aio_delete", 00:05:59.114 "fsdev_aio_create", 00:05:59.114 "iscsi_get_histogram", 00:05:59.114 "iscsi_enable_histogram", 00:05:59.114 "iscsi_set_options", 00:05:59.114 "iscsi_get_auth_groups", 00:05:59.114 "iscsi_auth_group_remove_secret", 00:05:59.114 "iscsi_auth_group_add_secret", 00:05:59.114 "iscsi_delete_auth_group", 00:05:59.114 "iscsi_create_auth_group", 00:05:59.114 "iscsi_set_discovery_auth", 00:05:59.114 "iscsi_get_options", 00:05:59.114 "iscsi_target_node_request_logout", 00:05:59.114 "iscsi_target_node_set_redirect", 00:05:59.114 "iscsi_target_node_set_auth", 00:05:59.114 "iscsi_target_node_add_lun", 00:05:59.114 "iscsi_get_stats", 00:05:59.114 "iscsi_get_connections", 00:05:59.114 "iscsi_portal_group_set_auth", 00:05:59.114 "iscsi_start_portal_group", 00:05:59.114 "iscsi_delete_portal_group", 00:05:59.114 "iscsi_create_portal_group", 00:05:59.114 "iscsi_get_portal_groups", 00:05:59.114 "iscsi_delete_target_node", 00:05:59.114 "iscsi_target_node_remove_pg_ig_maps", 00:05:59.114 "iscsi_target_node_add_pg_ig_maps", 00:05:59.114 "iscsi_create_target_node", 00:05:59.114 "iscsi_get_target_nodes", 00:05:59.114 "iscsi_delete_initiator_group", 00:05:59.114 "iscsi_initiator_group_remove_initiators", 00:05:59.114 "iscsi_initiator_group_add_initiators", 00:05:59.114 "iscsi_create_initiator_group", 00:05:59.114 "iscsi_get_initiator_groups", 00:05:59.114 "nvmf_set_crdt", 00:05:59.114 "nvmf_set_config", 00:05:59.114 "nvmf_set_max_subsystems", 00:05:59.114 "nvmf_stop_mdns_prr", 00:05:59.114 "nvmf_publish_mdns_prr", 00:05:59.114 "nvmf_subsystem_get_listeners", 00:05:59.114 "nvmf_subsystem_get_qpairs", 00:05:59.114 "nvmf_subsystem_get_controllers", 00:05:59.114 "nvmf_get_stats", 00:05:59.114 "nvmf_get_transports", 00:05:59.114 "nvmf_create_transport", 00:05:59.114 "nvmf_get_targets", 00:05:59.114 
"nvmf_delete_target", 00:05:59.114 "nvmf_create_target", 00:05:59.114 "nvmf_subsystem_allow_any_host", 00:05:59.115 "nvmf_subsystem_set_keys", 00:05:59.115 "nvmf_subsystem_remove_host", 00:05:59.115 "nvmf_subsystem_add_host", 00:05:59.115 "nvmf_ns_remove_host", 00:05:59.115 "nvmf_ns_add_host", 00:05:59.115 "nvmf_subsystem_remove_ns", 00:05:59.115 "nvmf_subsystem_set_ns_ana_group", 00:05:59.115 "nvmf_subsystem_add_ns", 00:05:59.115 "nvmf_subsystem_listener_set_ana_state", 00:05:59.115 "nvmf_discovery_get_referrals", 00:05:59.115 "nvmf_discovery_remove_referral", 00:05:59.115 "nvmf_discovery_add_referral", 00:05:59.115 "nvmf_subsystem_remove_listener", 00:05:59.115 "nvmf_subsystem_add_listener", 00:05:59.115 "nvmf_delete_subsystem", 00:05:59.115 "nvmf_create_subsystem", 00:05:59.115 "nvmf_get_subsystems", 00:05:59.115 "env_dpdk_get_mem_stats", 00:05:59.115 "nbd_get_disks", 00:05:59.115 "nbd_stop_disk", 00:05:59.115 "nbd_start_disk", 00:05:59.115 "ublk_recover_disk", 00:05:59.115 "ublk_get_disks", 00:05:59.115 "ublk_stop_disk", 00:05:59.115 "ublk_start_disk", 00:05:59.115 "ublk_destroy_target", 00:05:59.115 "ublk_create_target", 00:05:59.115 "virtio_blk_create_transport", 00:05:59.115 "virtio_blk_get_transports", 00:05:59.115 "vhost_controller_set_coalescing", 00:05:59.115 "vhost_get_controllers", 00:05:59.115 "vhost_delete_controller", 00:05:59.115 "vhost_create_blk_controller", 00:05:59.115 "vhost_scsi_controller_remove_target", 00:05:59.115 "vhost_scsi_controller_add_target", 00:05:59.115 "vhost_start_scsi_controller", 00:05:59.115 "vhost_create_scsi_controller", 00:05:59.115 "thread_set_cpumask", 00:05:59.115 "scheduler_set_options", 00:05:59.115 "framework_get_governor", 00:05:59.115 "framework_get_scheduler", 00:05:59.115 "framework_set_scheduler", 00:05:59.115 "framework_get_reactors", 00:05:59.115 "thread_get_io_channels", 00:05:59.115 "thread_get_pollers", 00:05:59.115 "thread_get_stats", 00:05:59.115 "framework_monitor_context_switch", 00:05:59.115 "spdk_kill_instance", 00:05:59.115 "log_enable_timestamps", 00:05:59.115 "log_get_flags", 00:05:59.115 "log_clear_flag", 00:05:59.115 "log_set_flag", 00:05:59.115 "log_get_level", 00:05:59.115 "log_set_level", 00:05:59.115 "log_get_print_level", 00:05:59.115 "log_set_print_level", 00:05:59.115 "framework_enable_cpumask_locks", 00:05:59.115 "framework_disable_cpumask_locks", 00:05:59.115 "framework_wait_init", 00:05:59.115 "framework_start_init", 00:05:59.115 "scsi_get_devices", 00:05:59.115 "bdev_get_histogram", 00:05:59.115 "bdev_enable_histogram", 00:05:59.115 "bdev_set_qos_limit", 00:05:59.115 "bdev_set_qd_sampling_period", 00:05:59.115 "bdev_get_bdevs", 00:05:59.115 "bdev_reset_iostat", 00:05:59.115 "bdev_get_iostat", 00:05:59.115 "bdev_examine", 00:05:59.115 "bdev_wait_for_examine", 00:05:59.115 "bdev_set_options", 00:05:59.115 "accel_get_stats", 00:05:59.115 "accel_set_options", 00:05:59.115 "accel_set_driver", 00:05:59.115 "accel_crypto_key_destroy", 00:05:59.115 "accel_crypto_keys_get", 00:05:59.115 "accel_crypto_key_create", 00:05:59.115 "accel_assign_opc", 00:05:59.115 "accel_get_module_info", 00:05:59.115 "accel_get_opc_assignments", 00:05:59.115 "vmd_rescan", 00:05:59.115 "vmd_remove_device", 00:05:59.115 "vmd_enable", 00:05:59.115 "sock_get_default_impl", 00:05:59.115 "sock_set_default_impl", 00:05:59.115 "sock_impl_set_options", 00:05:59.115 "sock_impl_get_options", 00:05:59.115 "iobuf_get_stats", 00:05:59.115 "iobuf_set_options", 00:05:59.115 "keyring_get_keys", 00:05:59.115 "framework_get_pci_devices", 00:05:59.115 
"framework_get_config", 00:05:59.115 "framework_get_subsystems", 00:05:59.115 "fsdev_set_opts", 00:05:59.115 "fsdev_get_opts", 00:05:59.115 "trace_get_info", 00:05:59.115 "trace_get_tpoint_group_mask", 00:05:59.115 "trace_disable_tpoint_group", 00:05:59.115 "trace_enable_tpoint_group", 00:05:59.115 "trace_clear_tpoint_mask", 00:05:59.115 "trace_set_tpoint_mask", 00:05:59.115 "notify_get_notifications", 00:05:59.115 "notify_get_types", 00:05:59.115 "spdk_get_version", 00:05:59.115 "rpc_get_methods" 00:05:59.115 ] 00:05:59.376 04:22:55 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:59.376 04:22:55 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:59.376 04:22:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.376 04:22:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:59.376 04:22:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58022 00:05:59.376 04:22:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58022 ']' 00:05:59.376 04:22:55 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58022 00:05:59.376 04:22:55 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:59.376 04:22:55 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.376 04:22:55 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58022 00:05:59.376 04:22:55 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.376 04:22:55 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.376 killing process with pid 58022 00:05:59.376 04:22:55 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58022' 00:05:59.376 04:22:55 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58022 00:05:59.376 04:22:55 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58022 00:06:00.760 00:06:00.760 real 0m2.428s 00:06:00.760 user 0m4.332s 00:06:00.760 sys 0m0.404s 00:06:00.760 04:22:56 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.760 ************************************ 00:06:00.760 END TEST spdkcli_tcp 00:06:00.760 ************************************ 00:06:00.760 04:22:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.760 04:22:56 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:00.760 04:22:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.760 04:22:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.760 04:22:56 -- common/autotest_common.sh@10 -- # set +x 00:06:00.760 ************************************ 00:06:00.760 START TEST dpdk_mem_utility 00:06:00.760 ************************************ 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:00.760 * Looking for test storage... 
00:06:00.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.760 04:22:57 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.760 --rc genhtml_branch_coverage=1 00:06:00.760 --rc genhtml_function_coverage=1 00:06:00.760 --rc genhtml_legend=1 00:06:00.760 --rc geninfo_all_blocks=1 00:06:00.760 --rc geninfo_unexecuted_blocks=1 00:06:00.760 00:06:00.760 ' 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.760 --rc 
genhtml_branch_coverage=1 00:06:00.760 --rc genhtml_function_coverage=1 00:06:00.760 --rc genhtml_legend=1 00:06:00.760 --rc geninfo_all_blocks=1 00:06:00.760 --rc geninfo_unexecuted_blocks=1 00:06:00.760 00:06:00.760 ' 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.760 --rc genhtml_branch_coverage=1 00:06:00.760 --rc genhtml_function_coverage=1 00:06:00.760 --rc genhtml_legend=1 00:06:00.760 --rc geninfo_all_blocks=1 00:06:00.760 --rc geninfo_unexecuted_blocks=1 00:06:00.760 00:06:00.760 ' 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.760 --rc genhtml_branch_coverage=1 00:06:00.760 --rc genhtml_function_coverage=1 00:06:00.760 --rc genhtml_legend=1 00:06:00.760 --rc geninfo_all_blocks=1 00:06:00.760 --rc geninfo_unexecuted_blocks=1 00:06:00.760 00:06:00.760 ' 00:06:00.760 04:22:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:00.760 04:22:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58127 00:06:00.760 04:22:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58127 00:06:00.760 04:22:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58127 ']' 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.760 04:22:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:00.760 [2024-11-27 04:22:57.195832] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:00.760 [2024-11-27 04:22:57.195955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58127 ] 00:06:01.020 [2024-11-27 04:22:57.351270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.020 [2024-11-27 04:22:57.432644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.591 04:22:58 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.591 04:22:58 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:01.591 04:22:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:01.591 04:22:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:01.591 04:22:58 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.591 04:22:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.591 { 00:06:01.592 "filename": "/tmp/spdk_mem_dump.txt" 00:06:01.592 } 00:06:01.592 04:22:58 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.592 04:22:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:01.592 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:01.592 1 heaps totaling size 824.000000 MiB 00:06:01.592 size: 824.000000 MiB heap id: 0 00:06:01.592 end heaps---------- 00:06:01.592 9 mempools totaling size 603.782043 MiB 00:06:01.592 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:01.592 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:01.592 size: 100.555481 MiB name: bdev_io_58127 00:06:01.592 size: 50.003479 MiB name: msgpool_58127 00:06:01.592 size: 36.509338 MiB name: fsdev_io_58127 00:06:01.592 size: 21.763794 MiB name: PDU_Pool 00:06:01.592 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:01.592 size: 4.133484 MiB name: evtpool_58127 00:06:01.592 size: 0.026123 MiB name: Session_Pool 00:06:01.592 end mempools------- 00:06:01.592 6 memzones totaling size 4.142822 MiB 00:06:01.592 size: 1.000366 MiB name: RG_ring_0_58127 00:06:01.592 size: 1.000366 MiB name: RG_ring_1_58127 00:06:01.592 size: 1.000366 MiB name: RG_ring_4_58127 00:06:01.592 size: 1.000366 MiB name: RG_ring_5_58127 00:06:01.592 size: 0.125366 MiB name: RG_ring_2_58127 00:06:01.592 size: 0.015991 MiB name: RG_ring_3_58127 00:06:01.592 end memzones------- 00:06:01.592 04:22:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:01.592 heap id: 0 total size: 824.000000 MiB number of busy elements: 325 number of free elements: 18 00:06:01.592 list of free elements. 
size: 16.778931 MiB 00:06:01.592 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:01.592 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:01.592 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:01.592 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:01.592 element at address: 0x200019900040 with size: 0.999939 MiB 00:06:01.592 element at address: 0x200019a00000 with size: 0.999084 MiB 00:06:01.592 element at address: 0x200032600000 with size: 0.994324 MiB 00:06:01.592 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:01.592 element at address: 0x200019200000 with size: 0.959656 MiB 00:06:01.592 element at address: 0x200019d00040 with size: 0.936401 MiB 00:06:01.592 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:01.592 element at address: 0x20001b400000 with size: 0.559753 MiB 00:06:01.592 element at address: 0x200000c00000 with size: 0.489441 MiB 00:06:01.592 element at address: 0x200019600000 with size: 0.487976 MiB 00:06:01.592 element at address: 0x200019e00000 with size: 0.485413 MiB 00:06:01.592 element at address: 0x200012c00000 with size: 0.433472 MiB 00:06:01.592 element at address: 0x200028800000 with size: 0.390686 MiB 00:06:01.592 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:01.592 list of standard malloc elements. size: 199.290161 MiB 00:06:01.592 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:01.592 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:01.592 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:01.592 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:01.592 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:06:01.592 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:01.592 element at address: 0x200019deff40 with size: 0.062683 MiB 00:06:01.592 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:01.592 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:01.592 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:06:01.592 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:01.592 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:06:01.592 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:01.592 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:01.593 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:01.593 element at 
address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012bff580 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012bff980 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200012cefbc0 
with size: 0.000244 MiB 00:06:01.593 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200019affc40 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b48f4c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:06:01.593 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4912c0 with size: 0.000244 MiB 
00:06:01.594 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:06:01.594 element at 
address: 0x20001b4944c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:06:01.594 element at address: 0x200028864040 with size: 0.000244 MiB 00:06:01.594 element at address: 0x200028864140 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886ae00 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886b080 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886b180 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886b280 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886b380 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886b480 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886b580 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886b680 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886b780 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886b880 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886b980 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886be80 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886c080 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886c180 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886c280 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886c380 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886c480 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886c580 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886c680 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886c780 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886c880 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886c980 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886ce80 
with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886d080 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886d180 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886d280 with size: 0.000244 MiB 00:06:01.594 element at address: 0x20002886d380 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886d480 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886d580 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886d680 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886d780 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886d880 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886d980 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886da80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886db80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886de80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886df80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886e080 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886e180 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886e280 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886e380 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886e480 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886e580 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886e680 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886e780 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886e880 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886e980 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886f080 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886f180 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886f280 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886f380 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886f480 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886f580 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886f680 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886f780 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886f880 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886f980 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:06:01.595 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:06:01.595 list of memzone associated elements. 
size: 607.930908 MiB 00:06:01.595 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:06:01.595 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:01.595 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:06:01.595 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:01.595 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:06:01.595 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58127_0 00:06:01.595 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:01.595 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58127_0 00:06:01.595 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:01.595 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58127_0 00:06:01.595 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:06:01.595 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:01.595 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:06:01.595 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:01.595 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:01.595 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58127_0 00:06:01.595 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:01.595 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58127 00:06:01.595 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:01.595 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58127 00:06:01.595 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:06:01.595 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:01.595 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:06:01.595 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:01.595 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:01.595 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:01.595 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:06:01.595 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:01.595 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:01.595 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58127 00:06:01.595 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:01.595 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58127 00:06:01.595 element at address: 0x200019affd40 with size: 1.000549 MiB 00:06:01.595 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58127 00:06:01.595 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:06:01.595 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58127 00:06:01.595 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:01.595 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58127 00:06:01.595 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:01.595 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58127 00:06:01.595 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:06:01.595 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:01.595 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:06:01.595 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:01.595 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:06:01.595 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:01.595 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:01.595 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58127 00:06:01.595 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:01.595 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58127 00:06:01.595 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:06:01.595 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:01.595 element at address: 0x200028864240 with size: 0.023804 MiB 00:06:01.595 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:01.595 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:01.595 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58127 00:06:01.595 element at address: 0x20002886a3c0 with size: 0.002502 MiB 00:06:01.595 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:01.595 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:01.595 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58127 00:06:01.595 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:01.595 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58127 00:06:01.595 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:01.596 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58127 00:06:01.596 element at address: 0x20002886af00 with size: 0.000366 MiB 00:06:01.596 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:01.596 04:22:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:01.596 04:22:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58127 00:06:01.596 04:22:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58127 ']' 00:06:01.596 04:22:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58127 00:06:01.596 04:22:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:01.596 04:22:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.596 04:22:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58127 00:06:01.596 04:22:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.596 killing process with pid 58127 00:06:01.596 04:22:58 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.596 04:22:58 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58127' 00:06:01.596 04:22:58 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58127 00:06:01.596 04:22:58 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58127 00:06:03.062 00:06:03.062 real 0m2.330s 00:06:03.062 user 0m2.351s 00:06:03.062 sys 0m0.387s 00:06:03.062 04:22:59 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.062 ************************************ 00:06:03.062 END TEST dpdk_mem_utility 00:06:03.062 04:22:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.062 ************************************ 00:06:03.062 04:22:59 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:03.062 04:22:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.062 04:22:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.062 04:22:59 -- common/autotest_common.sh@10 -- # set +x 
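The teardown trace above follows the killprocess helper from autotest_common.sh: a pid guard, a kill -0 liveness check, a ps lookup of the reactor name, then kill and wait. A minimal sketch of that pattern, assuming $pid holds the test app's PID (the real helper also handles sudo-owned processes and retries, which the log does not show):

# Sketch of the killprocess teardown pattern traced above (simplified).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1           # the '[' -z "$pid" ']' guard from the trace
    kill -0 "$pid" || return 0          # process already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        ps --no-headers -o comm= "$pid" # report which reactor is being killed
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                         # reap it so sockets and hugepages free up
}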
00:06:03.062 ************************************ 00:06:03.062 START TEST event 00:06:03.062 ************************************ 00:06:03.062 04:22:59 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:03.062 * Looking for test storage... 00:06:03.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:03.062 04:22:59 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.062 04:22:59 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.062 04:22:59 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.062 04:22:59 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.062 04:22:59 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.062 04:22:59 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.062 04:22:59 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.062 04:22:59 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.062 04:22:59 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.062 04:22:59 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.062 04:22:59 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.062 04:22:59 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.062 04:22:59 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.062 04:22:59 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.062 04:22:59 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.062 04:22:59 event -- scripts/common.sh@344 -- # case "$op" in 00:06:03.062 04:22:59 event -- scripts/common.sh@345 -- # : 1 00:06:03.062 04:22:59 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.062 04:22:59 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.062 04:22:59 event -- scripts/common.sh@365 -- # decimal 1 00:06:03.062 04:22:59 event -- scripts/common.sh@353 -- # local d=1 00:06:03.062 04:22:59 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.062 04:22:59 event -- scripts/common.sh@355 -- # echo 1 00:06:03.062 04:22:59 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.062 04:22:59 event -- scripts/common.sh@366 -- # decimal 2 00:06:03.062 04:22:59 event -- scripts/common.sh@353 -- # local d=2 00:06:03.062 04:22:59 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.062 04:22:59 event -- scripts/common.sh@355 -- # echo 2 00:06:03.062 04:22:59 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.062 04:22:59 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.062 04:22:59 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.062 04:22:59 event -- scripts/common.sh@368 -- # return 0 00:06:03.062 04:22:59 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.062 04:22:59 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.062 --rc genhtml_branch_coverage=1 00:06:03.062 --rc genhtml_function_coverage=1 00:06:03.062 --rc genhtml_legend=1 00:06:03.062 --rc geninfo_all_blocks=1 00:06:03.062 --rc geninfo_unexecuted_blocks=1 00:06:03.062 00:06:03.062 ' 00:06:03.062 04:22:59 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.062 --rc genhtml_branch_coverage=1 00:06:03.062 --rc genhtml_function_coverage=1 00:06:03.062 --rc genhtml_legend=1 00:06:03.062 --rc 
geninfo_all_blocks=1 00:06:03.062 --rc geninfo_unexecuted_blocks=1 00:06:03.062 00:06:03.062 ' 00:06:03.062 04:22:59 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.062 --rc genhtml_branch_coverage=1 00:06:03.062 --rc genhtml_function_coverage=1 00:06:03.062 --rc genhtml_legend=1 00:06:03.062 --rc geninfo_all_blocks=1 00:06:03.062 --rc geninfo_unexecuted_blocks=1 00:06:03.062 00:06:03.062 ' 00:06:03.062 04:22:59 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.062 --rc genhtml_branch_coverage=1 00:06:03.062 --rc genhtml_function_coverage=1 00:06:03.062 --rc genhtml_legend=1 00:06:03.062 --rc geninfo_all_blocks=1 00:06:03.062 --rc geninfo_unexecuted_blocks=1 00:06:03.062 00:06:03.062 ' 00:06:03.062 04:22:59 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:03.062 04:22:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:03.062 04:22:59 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.062 04:22:59 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:03.062 04:22:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.062 04:22:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.062 ************************************ 00:06:03.062 START TEST event_perf 00:06:03.062 ************************************ 00:06:03.062 04:22:59 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.062 Running I/O for 1 seconds...[2024-11-27 04:22:59.556856] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:03.062 [2024-11-27 04:22:59.556962] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58219 ] 00:06:03.322 [2024-11-27 04:22:59.720404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.322 [2024-11-27 04:22:59.846793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.322 [2024-11-27 04:22:59.846989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.322 Running I/O for 1 seconds...[2024-11-27 04:22:59.847499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.322 [2024-11-27 04:22:59.847637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.702 00:06:04.702 lcore 0: 138587 00:06:04.702 lcore 1: 138588 00:06:04.702 lcore 2: 138587 00:06:04.702 lcore 3: 138588 00:06:04.702 done. 
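The event_perf run above was launched with core mask 0xF and a 1-second duration; each "lcore N" line is the number of events that core processed. A sketch of reproducing the run by hand, using the binary path shown in the trace:

# Run the event poller on 4 cores (mask 0xF) for 1 second, as in the log above.
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
# Expected output shape: one "lcore N: <event count>" line per core
# (e.g. "lcore 0: 138587"), followed by "done."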
00:06:04.702 00:06:04.702 real 0m1.502s 00:06:04.702 user 0m4.280s 00:06:04.702 sys 0m0.098s 00:06:04.702 ************************************ 00:06:04.702 END TEST event_perf 00:06:04.702 ************************************ 00:06:04.702 04:23:01 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.702 04:23:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.702 04:23:01 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:04.702 04:23:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:04.702 04:23:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.702 04:23:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.702 ************************************ 00:06:04.702 START TEST event_reactor 00:06:04.702 ************************************ 00:06:04.702 04:23:01 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:04.702 [2024-11-27 04:23:01.133226] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:04.702 [2024-11-27 04:23:01.133375] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58253 ] 00:06:04.962 [2024-11-27 04:23:01.291237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.962 [2024-11-27 04:23:01.410296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.334 test_start 00:06:06.334 oneshot 00:06:06.334 tick 100 00:06:06.334 tick 100 00:06:06.334 tick 250 00:06:06.334 tick 100 00:06:06.334 tick 100 00:06:06.334 tick 100 00:06:06.334 tick 250 00:06:06.334 tick 500 00:06:06.334 tick 100 00:06:06.334 tick 100 00:06:06.334 tick 250 00:06:06.334 tick 100 00:06:06.334 tick 100 00:06:06.334 test_end 00:06:06.334 00:06:06.334 real 0m1.461s 00:06:06.334 user 0m1.275s 00:06:06.334 sys 0m0.077s 00:06:06.334 04:23:02 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.334 04:23:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:06.334 ************************************ 00:06:06.334 END TEST event_reactor 00:06:06.334 ************************************ 00:06:06.334 04:23:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.334 04:23:02 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:06.334 04:23:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.334 04:23:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.334 ************************************ 00:06:06.334 START TEST event_reactor_perf 00:06:06.334 ************************************ 00:06:06.334 04:23:02 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.334 [2024-11-27 04:23:02.653318] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
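In the event_reactor output above, the lines between test_start and test_end come from timed functions registered on the single reactor (mask 0x1). A sketch of the invocation the log used:

# Single-core reactor test, 1-second run, as traced above.
/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
# Each "tick <n>" line is one firing of a registered timed function; the
# recurring values 100/250/500 look like the timers' configured periods
# (an inference from the output pattern, not stated by the log itself).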
00:06:06.334 [2024-11-27 04:23:02.653823] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58295 ] 00:06:06.334 [2024-11-27 04:23:02.815123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.334 [2024-11-27 04:23:02.912092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.705 test_start 00:06:07.705 test_end 00:06:07.705 Performance: 318569 events per second 00:06:07.705 00:06:07.705 real 0m1.445s 00:06:07.705 user 0m1.270s 00:06:07.705 sys 0m0.066s 00:06:07.705 ************************************ 00:06:07.705 END TEST event_reactor_perf 00:06:07.705 ************************************ 00:06:07.705 04:23:04 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.705 04:23:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.705 04:23:04 event -- event/event.sh@49 -- # uname -s 00:06:07.705 04:23:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:07.705 04:23:04 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:07.705 04:23:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.705 04:23:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.705 04:23:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.705 ************************************ 00:06:07.705 START TEST event_scheduler 00:06:07.705 ************************************ 00:06:07.705 04:23:04 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:07.705 * Looking for test storage... 
00:06:07.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:07.705 04:23:04 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.705 04:23:04 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.705 04:23:04 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.705 04:23:04 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:07.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.705 04:23:04 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:07.705 04:23:04 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.705 04:23:04 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.705 --rc genhtml_branch_coverage=1 00:06:07.705 --rc genhtml_function_coverage=1 00:06:07.705 --rc genhtml_legend=1 00:06:07.705 --rc geninfo_all_blocks=1 00:06:07.705 --rc geninfo_unexecuted_blocks=1 00:06:07.705 00:06:07.705 ' 00:06:07.705 04:23:04 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.705 --rc genhtml_branch_coverage=1 00:06:07.705 --rc genhtml_function_coverage=1 00:06:07.705 --rc genhtml_legend=1 00:06:07.705 --rc geninfo_all_blocks=1 00:06:07.705 --rc geninfo_unexecuted_blocks=1 00:06:07.705 00:06:07.705 ' 00:06:07.705 04:23:04 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.705 --rc genhtml_branch_coverage=1 00:06:07.705 --rc genhtml_function_coverage=1 00:06:07.705 --rc genhtml_legend=1 00:06:07.705 --rc geninfo_all_blocks=1 00:06:07.705 --rc geninfo_unexecuted_blocks=1 00:06:07.705 00:06:07.705 ' 00:06:07.705 04:23:04 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.705 --rc genhtml_branch_coverage=1 00:06:07.705 --rc genhtml_function_coverage=1 00:06:07.705 --rc genhtml_legend=1 00:06:07.705 --rc geninfo_all_blocks=1 00:06:07.705 --rc geninfo_unexecuted_blocks=1 00:06:07.705 00:06:07.705 ' 00:06:07.705 04:23:04 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:07.705 04:23:04 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58364 00:06:07.705 04:23:04 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.705 04:23:04 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58364 00:06:07.705 04:23:04 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58364 ']' 00:06:07.705 04:23:04 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.706 04:23:04 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:07.706 04:23:04 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.706 04:23:04 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.706 04:23:04 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.706 04:23:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.962 [2024-11-27 04:23:04.336846] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
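The trace above backgrounds the scheduler test app with --wait-for-rpc and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch sequence, assuming the paths from the trace; the rpc_get_methods retry loop stands in for the real waitforlisten helper, whose internals this log does not show:

# Start the scheduler app paused (--wait-for-rpc) on 4 cores, main lcore 2.
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler \
    -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
# Poll the default RPC socket (/var/tmp/spdk.sock) until the app responds;
# the real helper caps this at max_retries=100, as seen in the trace.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null; do
    sleep 0.1
done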
00:06:07.962 [2024-11-27 04:23:04.336955] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58364 ] 00:06:07.962 [2024-11-27 04:23:04.497232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.220 [2024-11-27 04:23:04.600809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.220 [2024-11-27 04:23:04.601040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.220 [2024-11-27 04:23:04.601352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.220 [2024-11-27 04:23:04.601356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.784 04:23:05 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.784 04:23:05 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:08.784 04:23:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:08.784 04:23:05 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.784 04:23:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.784 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.784 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.784 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.784 POWER: Cannot set governor of lcore 0 to performance 00:06:08.784 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.784 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.784 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.784 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.784 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:08.784 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:08.784 POWER: Unable to set Power Management Environment for lcore 0 00:06:08.784 [2024-11-27 04:23:05.178706] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:08.784 [2024-11-27 04:23:05.178740] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:08.784 [2024-11-27 04:23:05.178750] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:08.784 [2024-11-27 04:23:05.178766] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:08.784 [2024-11-27 04:23:05.178773] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:08.784 [2024-11-27 04:23:05.178782] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:08.784 04:23:05 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.784 04:23:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:08.784 04:23:05 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.784 04:23:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.041 [2024-11-27 04:23:05.405140] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
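Above, scheduler.sh switches the still-paused app to the dynamic scheduler and then completes startup. The POWER/governor errors are the DPDK governor failing to open the cpufreq sysfs nodes inside the VM; the dynamic scheduler then proceeds without frequency scaling. The two RPCs from the trace:

# Select the dynamic scheduler, then finish framework initialization
# (rpc_cmd is the test suite's wrapper around scripts/rpc.py).
rpc_cmd framework_set_scheduler dynamic
rpc_cmd framework_start_init
# The "load limit 20 / core limit 80 / core busy 95" notices in the log are
# the dynamic scheduler's thresholds being applied during init.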
00:06:09.041 04:23:05 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.041 04:23:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:09.041 04:23:05 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.041 04:23:05 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.041 04:23:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.041 ************************************ 00:06:09.041 START TEST scheduler_create_thread 00:06:09.041 ************************************ 00:06:09.041 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:09.041 04:23:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:09.041 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.041 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.041 2 00:06:09.041 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.041 04:23:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:09.041 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.041 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.041 3 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.042 4 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.042 5 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.042 6 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.042 7 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.042 8 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.042 9 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.042 10 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.042 04:23:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.412 04:23:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.412 04:23:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:10.412 04:23:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:10.412 04:23:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.412 04:23:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.783 ************************************ 00:06:11.783 END TEST scheduler_create_thread 00:06:11.783 ************************************ 00:06:11.783 04:23:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.783 00:06:11.783 real 0m2.616s 00:06:11.783 user 0m0.016s 00:06:11.783 sys 0m0.005s 00:06:11.783 04:23:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.783 04:23:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.783 04:23:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:11.783 04:23:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58364 00:06:11.783 04:23:08 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58364 ']' 00:06:11.783 04:23:08 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58364 00:06:11.783 04:23:08 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:11.783 04:23:08 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.783 04:23:08 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58364 00:06:11.783 killing process with pid 58364 00:06:11.783 04:23:08 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:11.783 04:23:08 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:11.783 04:23:08 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58364' 00:06:11.783 04:23:08 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58364 00:06:11.783 04:23:08 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58364 00:06:12.068 [2024-11-27 04:23:08.519500] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
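The scheduler_create_thread test above drives everything through an RPC plugin; each bare number printed between calls (2 through 10, then 11 and 12) is the thread id returned by scheduler_thread_create. A condensed sketch of the calls visible in the trace:

# Create threads via the test's RPC plugin; each call prints the new thread id.
# Masks and loads mirror the traced calls (-m cpumask, -a active percentage).
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"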
00:06:12.631 00:06:12.631 real 0m4.966s 00:06:12.631 user 0m8.732s 00:06:12.631 sys 0m0.324s 00:06:12.631 ************************************ 00:06:12.631 END TEST event_scheduler 00:06:12.631 ************************************ 00:06:12.631 04:23:09 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.631 04:23:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.631 04:23:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:12.631 04:23:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:12.631 04:23:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.631 04:23:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.631 04:23:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.631 ************************************ 00:06:12.631 START TEST app_repeat 00:06:12.631 ************************************ 00:06:12.631 04:23:09 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:12.631 04:23:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.631 04:23:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.631 04:23:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:12.631 04:23:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.631 04:23:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:12.631 04:23:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:12.631 04:23:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:12.631 Process app_repeat pid: 58466 00:06:12.631 spdk_app_start Round 0 00:06:12.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.631 04:23:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58466 00:06:12.631 04:23:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.631 04:23:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58466' 00:06:12.632 04:23:09 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:12.632 04:23:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:12.632 04:23:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:12.632 04:23:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58466 /var/tmp/spdk-nbd.sock 00:06:12.632 04:23:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58466 ']' 00:06:12.632 04:23:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.632 04:23:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.632 04:23:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.632 04:23:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.632 04:23:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.632 [2024-11-27 04:23:09.199064] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:12.632 [2024-11-27 04:23:09.199690] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58466 ] 00:06:12.888 [2024-11-27 04:23:09.353644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.888 [2024-11-27 04:23:09.430798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.888 [2024-11-27 04:23:09.430807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.451 04:23:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.451 04:23:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:13.451 04:23:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.708 Malloc0 00:06:13.708 04:23:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.964 Malloc1 00:06:13.964 04:23:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.964 04:23:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.965 04:23:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.965 04:23:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.965 04:23:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.965 04:23:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.965 04:23:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.965 04:23:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.965 04:23:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.965 04:23:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.965 04:23:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.965 04:23:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.965 04:23:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.965 04:23:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.965 04:23:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.965 04:23:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.222 /dev/nbd0 00:06:14.222 04:23:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.222 04:23:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.222 04:23:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:14.222 04:23:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:14.222 04:23:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.222 04:23:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.222 04:23:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:14.222 04:23:10 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:14.222 04:23:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.222 04:23:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.222 04:23:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.222 1+0 records in 00:06:14.222 1+0 records out 00:06:14.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000156824 s, 26.1 MB/s 00:06:14.222 04:23:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.222 04:23:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:14.222 04:23:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.222 04:23:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.222 04:23:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:14.222 04:23:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.222 04:23:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.222 04:23:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.480 /dev/nbd1 00:06:14.480 04:23:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.480 04:23:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.480 04:23:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:14.480 04:23:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:14.480 04:23:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.480 04:23:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.480 04:23:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:14.480 04:23:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:14.480 04:23:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.480 04:23:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.480 04:23:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.480 1+0 records in 00:06:14.480 1+0 records out 00:06:14.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173005 s, 23.7 MB/s 00:06:14.480 04:23:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.480 04:23:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:14.480 04:23:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.480 04:23:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.480 04:23:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:14.480 04:23:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.480 04:23:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.480 04:23:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.480 04:23:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
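Each nbd attach above is gated by a readiness probe: poll /proc/partitions until the kernel lists the device, then prove it actually serves I/O with a single 4 KiB direct read whose copied size is checked afterwards. A simplified sketch of that probe (the trace uses the same 20-iteration budget; the scratch path and single-loop shape are assumptions):

  # Simplified sketch of the nbd readiness probe traced above (scratch path assumed).
  waitfornbd() {
      local nbd_name=$1 tmp=/tmp/nbdtest i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      ((i <= 20)) || return 1                           # device never appeared
      dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
      local size
      size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      [ "$size" != 0 ]                                  # a zero-byte copy means a dead device
  }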
00:06:14.480 04:23:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.737 04:23:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.737 { 00:06:14.738 "nbd_device": "/dev/nbd0", 00:06:14.738 "bdev_name": "Malloc0" 00:06:14.738 }, 00:06:14.738 { 00:06:14.738 "nbd_device": "/dev/nbd1", 00:06:14.738 "bdev_name": "Malloc1" 00:06:14.738 } 00:06:14.738 ]' 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.738 { 00:06:14.738 "nbd_device": "/dev/nbd0", 00:06:14.738 "bdev_name": "Malloc0" 00:06:14.738 }, 00:06:14.738 { 00:06:14.738 "nbd_device": "/dev/nbd1", 00:06:14.738 "bdev_name": "Malloc1" 00:06:14.738 } 00:06:14.738 ]' 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.738 /dev/nbd1' 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.738 /dev/nbd1' 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.738 256+0 records in 00:06:14.738 256+0 records out 00:06:14.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119467 s, 87.8 MB/s 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.738 256+0 records in 00:06:14.738 256+0 records out 00:06:14.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156282 s, 67.1 MB/s 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.738 256+0 records in 00:06:14.738 256+0 records out 00:06:14.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196114 s, 53.5 MB/s 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.738 04:23:11 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.738 04:23:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.996 04:23:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.996 04:23:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.996 04:23:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.996 04:23:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.996 04:23:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.996 04:23:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.996 04:23:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.996 04:23:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.996 04:23:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.996 04:23:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.253 04:23:11 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.253 04:23:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.253 04:23:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.819 04:23:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.384 [2024-11-27 04:23:12.677501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.384 [2024-11-27 04:23:12.750279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.384 [2024-11-27 04:23:12.750419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.384 [2024-11-27 04:23:12.846393] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.384 [2024-11-27 04:23:12.846446] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.979 spdk_app_start Round 1 00:06:18.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.979 04:23:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.979 04:23:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:18.979 04:23:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58466 /var/tmp/spdk-nbd.sock 00:06:18.979 04:23:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58466 ']' 00:06:18.979 04:23:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.979 04:23:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.979 04:23:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
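Between rounds the harness asks the target which nbd devices are still attached and reduces the JSON to a count, exactly as the jq/grep pipeline above shows. Condensed from the trace:

  # Count attached nbd devices from the target's JSON (condensed from the trace above).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  disks_json=$("$rpc" -s "$sock" nbd_get_disks)                  # '[]' once everything is detached
  disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')   # one /dev/nbdN per line
  count=$(echo "$disks_name" | grep -c /dev/nbd || true)         # grep -c exits 1 on zero matches
  [ "$count" -eq 0 ] || echo "devices still attached: $count"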
00:06:18.979 04:23:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.979 04:23:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.979 04:23:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.979 04:23:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:18.979 04:23:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.979 Malloc0 00:06:18.979 04:23:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.237 Malloc1 00:06:19.237 04:23:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.237 04:23:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.237 04:23:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.237 04:23:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.237 04:23:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.237 04:23:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.237 04:23:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.237 04:23:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.237 04:23:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.237 04:23:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.237 04:23:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.237 04:23:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.237 04:23:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:19.237 04:23:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.237 04:23:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.237 04:23:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.495 /dev/nbd0 00:06:19.495 04:23:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.495 04:23:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.495 04:23:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:19.495 04:23:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:19.495 04:23:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.495 04:23:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.495 04:23:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:19.495 04:23:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:19.495 04:23:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.495 04:23:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.495 04:23:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.495 1+0 records in 00:06:19.495 1+0 records out 
00:06:19.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174669 s, 23.5 MB/s 00:06:19.495 04:23:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.495 04:23:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:19.495 04:23:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.495 04:23:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.495 04:23:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:19.495 04:23:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.495 04:23:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.495 04:23:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.753 /dev/nbd1 00:06:19.753 04:23:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.753 04:23:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.753 04:23:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:19.753 04:23:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:19.753 04:23:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.753 04:23:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.753 04:23:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:19.753 04:23:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:19.753 04:23:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.753 04:23:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.753 04:23:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.753 1+0 records in 00:06:19.753 1+0 records out 00:06:19.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259205 s, 15.8 MB/s 00:06:19.753 04:23:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.753 04:23:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:19.753 04:23:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.753 04:23:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.753 04:23:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:19.753 04:23:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.753 04:23:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.753 04:23:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.753 04:23:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.753 04:23:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.014 04:23:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.014 { 00:06:20.014 "nbd_device": "/dev/nbd0", 00:06:20.014 "bdev_name": "Malloc0" 00:06:20.014 }, 00:06:20.014 { 00:06:20.014 "nbd_device": "/dev/nbd1", 00:06:20.014 "bdev_name": "Malloc1" 00:06:20.014 } 
00:06:20.014 ]' 00:06:20.014 04:23:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.014 { 00:06:20.014 "nbd_device": "/dev/nbd0", 00:06:20.014 "bdev_name": "Malloc0" 00:06:20.014 }, 00:06:20.014 { 00:06:20.014 "nbd_device": "/dev/nbd1", 00:06:20.014 "bdev_name": "Malloc1" 00:06:20.014 } 00:06:20.014 ]' 00:06:20.014 04:23:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.014 04:23:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.014 /dev/nbd1' 00:06:20.014 04:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.014 /dev/nbd1' 00:06:20.014 04:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.014 04:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.014 04:23:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.014 04:23:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.014 04:23:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.014 04:23:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.014 04:23:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.014 04:23:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.015 256+0 records in 00:06:20.015 256+0 records out 00:06:20.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00903857 s, 116 MB/s 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.015 256+0 records in 00:06:20.015 256+0 records out 00:06:20.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181008 s, 57.9 MB/s 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.015 256+0 records in 00:06:20.015 256+0 records out 00:06:20.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177314 s, 59.1 MB/s 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.015 04:23:16 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.015 04:23:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.274 04:23:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.274 04:23:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.274 04:23:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.274 04:23:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.274 04:23:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.274 04:23:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.274 04:23:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.274 04:23:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.274 04:23:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.274 04:23:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.532 04:23:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.532 04:23:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.532 04:23:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.532 04:23:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.532 04:23:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.532 04:23:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.532 04:23:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.532 04:23:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.532 04:23:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.532 04:23:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.532 04:23:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.791 04:23:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.791 04:23:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.791 04:23:17 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:20.791 04:23:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.791 04:23:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.791 04:23:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.791 04:23:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.791 04:23:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.791 04:23:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.791 04:23:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.791 04:23:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.791 04:23:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.791 04:23:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.048 04:23:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.614 [2024-11-27 04:23:18.052079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.614 [2024-11-27 04:23:18.129944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.614 [2024-11-27 04:23:18.130058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.872 [2024-11-27 04:23:18.226873] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.873 [2024-11-27 04:23:18.226935] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.418 spdk_app_start Round 2 00:06:24.418 04:23:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.418 04:23:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:24.418 04:23:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58466 /var/tmp/spdk-nbd.sock 00:06:24.418 04:23:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58466 ']' 00:06:24.418 04:23:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.418 04:23:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.418 04:23:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
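Every round rebuilds its devices from scratch: two 64 MiB RAM-backed malloc bdevs with 4096-byte blocks are created over the app's RPC socket, then exported to the kernel as nbd devices. The per-round setup, condensed from the trace (Malloc0/Malloc1 are the names the create RPC prints above):

  # Per-round device setup, condensed from the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  "$rpc" -s "$sock" bdev_malloc_create 64 4096      # 64 MiB, 4096-byte blocks -> prints Malloc0
  "$rpc" -s "$sock" bdev_malloc_create 64 4096      # second bdev              -> prints Malloc1
  "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1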
00:06:24.418 04:23:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.418 04:23:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.418 04:23:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.418 04:23:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:24.418 04:23:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.679 Malloc0 00:06:24.679 04:23:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.679 Malloc1 00:06:24.679 04:23:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.679 04:23:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.679 04:23:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.679 04:23:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.679 04:23:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.679 04:23:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.679 04:23:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.679 04:23:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.679 04:23:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.679 04:23:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.679 04:23:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.679 04:23:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.679 04:23:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.680 04:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.680 04:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.680 04:23:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.942 /dev/nbd0 00:06:24.942 04:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.942 04:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.942 04:23:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:24.942 04:23:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:24.942 04:23:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.942 04:23:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.942 04:23:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:24.942 04:23:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:24.942 04:23:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.942 04:23:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.942 04:23:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.942 1+0 records in 00:06:24.942 1+0 records out 
00:06:24.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204702 s, 20.0 MB/s 00:06:24.942 04:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.942 04:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:24.942 04:23:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.942 04:23:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.942 04:23:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:24.942 04:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.942 04:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.942 04:23:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.203 /dev/nbd1 00:06:25.203 04:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.203 04:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.203 04:23:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:25.203 04:23:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:25.203 04:23:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:25.203 04:23:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:25.203 04:23:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:25.203 04:23:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:25.203 04:23:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:25.203 04:23:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:25.203 04:23:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.203 1+0 records in 00:06:25.203 1+0 records out 00:06:25.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186934 s, 21.9 MB/s 00:06:25.203 04:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.203 04:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:25.203 04:23:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.203 04:23:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:25.203 04:23:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:25.203 04:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.203 04:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.203 04:23:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.203 04:23:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.203 04:23:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.463 { 00:06:25.463 "nbd_device": "/dev/nbd0", 00:06:25.463 "bdev_name": "Malloc0" 00:06:25.463 }, 00:06:25.463 { 00:06:25.463 "nbd_device": "/dev/nbd1", 00:06:25.463 "bdev_name": "Malloc1" 00:06:25.463 } 
00:06:25.463 ]' 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.463 { 00:06:25.463 "nbd_device": "/dev/nbd0", 00:06:25.463 "bdev_name": "Malloc0" 00:06:25.463 }, 00:06:25.463 { 00:06:25.463 "nbd_device": "/dev/nbd1", 00:06:25.463 "bdev_name": "Malloc1" 00:06:25.463 } 00:06:25.463 ]' 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.463 /dev/nbd1' 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.463 /dev/nbd1' 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.463 256+0 records in 00:06:25.463 256+0 records out 00:06:25.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514005 s, 204 MB/s 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.463 256+0 records in 00:06:25.463 256+0 records out 00:06:25.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148386 s, 70.7 MB/s 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.463 256+0 records in 00:06:25.463 256+0 records out 00:06:25.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164754 s, 63.6 MB/s 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.463 04:23:21 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.463 04:23:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.464 04:23:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.464 04:23:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.464 04:23:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.464 04:23:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.464 04:23:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.464 04:23:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.464 04:23:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.464 04:23:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.464 04:23:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.464 04:23:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.724 04:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.725 04:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.725 04:23:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.725 04:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.725 04:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.725 04:23:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.725 04:23:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.725 04:23:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.725 04:23:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.725 04:23:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.986 04:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.986 04:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.986 04:23:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.986 04:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.986 04:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.986 04:23:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.986 04:23:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.986 04:23:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.986 04:23:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.986 04:23:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.986 04:23:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.249 04:23:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.249 04:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.249 04:23:22 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:26.249 04:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.249 04:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.249 04:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.249 04:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:26.249 04:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.249 04:23:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.249 04:23:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.249 04:23:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.249 04:23:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.249 04:23:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.510 04:23:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:27.454 [2024-11-27 04:23:23.710298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.454 [2024-11-27 04:23:23.807945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.454 [2024-11-27 04:23:23.808092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.454 [2024-11-27 04:23:23.929486] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.454 [2024-11-27 04:23:23.929542] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.984 04:23:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58466 /var/tmp/spdk-nbd.sock 00:06:29.984 04:23:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58466 ']' 00:06:29.984 04:23:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.984 04:23:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.984 04:23:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
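The data-path check in each round is a plain write-then-compare, matching the dd/cmp lines above: 1 MiB of urandom data is written to each nbd device with direct I/O and read back byte-for-byte. Condensed (scratch path assumed):

  # Write/verify pass over both nbd devices (scratch path assumed).
  tmp=/tmp/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write, bypassing the page cache
      cmp -b -n 1M "$tmp" "$nbd"                              # byte-for-byte read-back check
  done
  rm -f "$tmp"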
00:06:29.984 04:23:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.984 04:23:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.984 04:23:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.984 04:23:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:29.984 04:23:26 event.app_repeat -- event/event.sh@39 -- # killprocess 58466 00:06:29.984 04:23:26 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58466 ']' 00:06:29.984 04:23:26 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58466 00:06:29.984 04:23:26 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:29.984 04:23:26 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.984 04:23:26 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58466 00:06:29.984 killing process with pid 58466 00:06:29.984 04:23:26 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.984 04:23:26 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.984 04:23:26 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58466' 00:06:29.984 04:23:26 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58466 00:06:29.984 04:23:26 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58466 00:06:30.256 spdk_app_start is called in Round 0. 00:06:30.257 Shutdown signal received, stop current app iteration 00:06:30.257 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:06:30.257 spdk_app_start is called in Round 1. 00:06:30.257 Shutdown signal received, stop current app iteration 00:06:30.257 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:06:30.257 spdk_app_start is called in Round 2. 00:06:30.257 Shutdown signal received, stop current app iteration 00:06:30.257 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:06:30.257 spdk_app_start is called in Round 3. 00:06:30.257 Shutdown signal received, stop current app iteration 00:06:30.257 04:23:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:30.257 04:23:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:30.257 00:06:30.257 real 0m17.562s 00:06:30.257 user 0m38.400s 00:06:30.257 sys 0m2.088s 00:06:30.257 ************************************ 00:06:30.257 END TEST app_repeat 00:06:30.257 ************************************ 00:06:30.257 04:23:26 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.257 04:23:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 04:23:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:30.257 04:23:26 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:30.257 04:23:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.257 04:23:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.257 04:23:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 ************************************ 00:06:30.257 START TEST cpu_locks 00:06:30.257 ************************************ 00:06:30.257 04:23:26 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:30.257 * Looking for test storage... 
00:06:30.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:30.554 04:23:26 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:30.554 04:23:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:06:30.554 04:23:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:30.554 04:23:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:30.554 04:23:26 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:30.554 04:23:26 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:30.554 04:23:26 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:30.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:30.554 --rc genhtml_branch_coverage=1
00:06:30.554 --rc genhtml_function_coverage=1
00:06:30.554 --rc genhtml_legend=1
00:06:30.554 --rc geninfo_all_blocks=1
00:06:30.554 --rc geninfo_unexecuted_blocks=1
00:06:30.554
00:06:30.554 '
00:06:30.554 04:23:26 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:30.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:30.554 --rc genhtml_branch_coverage=1
00:06:30.554 --rc genhtml_function_coverage=1
00:06:30.554 --rc genhtml_legend=1
00:06:30.554 --rc geninfo_all_blocks=1
00:06:30.554 --rc geninfo_unexecuted_blocks=1
00:06:30.554
00:06:30.554 '
00:06:30.554 04:23:26 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:30.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:30.554 --rc genhtml_branch_coverage=1
00:06:30.554 --rc genhtml_function_coverage=1
00:06:30.554 --rc genhtml_legend=1
00:06:30.554 --rc geninfo_all_blocks=1
00:06:30.554 --rc geninfo_unexecuted_blocks=1
00:06:30.554
00:06:30.554 '
00:06:30.554 04:23:26 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:30.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:30.554 --rc genhtml_branch_coverage=1
00:06:30.554 --rc genhtml_function_coverage=1
00:06:30.554 --rc genhtml_legend=1
00:06:30.554 --rc geninfo_all_blocks=1
00:06:30.554 --rc geninfo_unexecuted_blocks=1
00:06:30.554
00:06:30.554 '
00:06:30.554 04:23:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:30.554 04:23:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:30.554 04:23:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:30.554 04:23:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:30.554 04:23:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:30.554 04:23:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:30.554 04:23:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:30.554 ************************************
00:06:30.554 START TEST default_locks
00:06:30.554 ************************************
00:06:30.554 04:23:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:06:30.554 04:23:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58891
00:06:30.554 04:23:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58891
00:06:30.554 04:23:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58891 ']'
00:06:30.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 04:23:26 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:30.554 04:23:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:30.554 04:23:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:30.554 04:23:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:30.554 04:23:26 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:30.554 04:23:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:30.554 [2024-11-27 04:23:26.994017] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
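The lt/cmp_versions walk traced at the top of this section is easier to read outside the xtrace noise. A condensed sketch, keeping the helper names and the IFS=.-: splitting from the trace; the loop body is my abbreviation of scripts/common.sh, not a verbatim copy:

    lt() { cmp_versions "$1" "<" "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        # Walk components left to right, padding the shorter version with 0s.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return $?; fi
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return $?; fi
        done
        return 1   # equal versions satisfy neither strict comparison
    }

    lt 1.15 2 && echo "lcov is older than 2"   # returns 0, as in the trace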
00:06:30.555 [2024-11-27 04:23:26.994141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58891 ]
00:06:30.812 [2024-11-27 04:23:27.149991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:30.812 [2024-11-27 04:23:27.226332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.377 04:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:31.377 04:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:06:31.377 04:23:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58891
00:06:31.377 04:23:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:31.377 04:23:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58891
00:06:31.635 04:23:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58891
00:06:31.635 04:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58891 ']'
00:06:31.635 04:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58891
00:06:31.635 04:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:06:31.635 04:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:31.635 04:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58891
00:06:31.635 killing process with pid 58891
04:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:31.635 04:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:31.635 04:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58891'
00:06:31.635 04:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58891
00:06:31.635 04:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58891
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58891
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58891
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:33.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:33.010 ERROR: process (pid: 58891) is no longer running
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58891
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58891 ']'
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:33.010 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58891) - No such process
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:33.010 ************************************
00:06:33.010 END TEST default_locks
00:06:33.010 ************************************
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:33.010
00:06:33.010 real 0m2.341s
00:06:33.010 user 0m2.359s
00:06:33.010 sys 0m0.438s
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:33.010 04:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:33.010 04:23:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:33.010 04:23:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:33.010 04:23:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:33.010 04:23:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:33.010 ************************************
00:06:33.010 START TEST default_locks_via_rpc
00:06:33.010 ************************************
00:06:33.010 04:23:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:06:33.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
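The locks_exist check that default_locks just exercised is worth seeing without the trace prefixes. A reconstruction from the commands in the trace above; wrapping them as a standalone function is my own framing:

    # spdk_tgt takes one POSIX file lock per claimed core, named
    # /var/tmp/spdk_cpu_lock_NNN; lslocks lists the locks a pid holds.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 58891 && echo "target still holds its core locks"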
00:06:33.010 04:23:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58955
00:06:33.010 04:23:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58955
00:06:33.010 04:23:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:33.010 04:23:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58955 ']'
00:06:33.010 04:23:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:33.010 04:23:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:33.010 04:23:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:33.010 04:23:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:33.010 04:23:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:33.010 [2024-11-27 04:23:29.374361] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:06:33.010 [2024-11-27 04:23:29.374483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58955 ]
00:06:33.010 [2024-11-27 04:23:29.531078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:33.268 [2024-11-27 04:23:29.612458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58955
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58955
00:06:33.834 04:23:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:34.092 04:23:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58955
00:06:34.092 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58955 ']'
00:06:34.092 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58955
00:06:34.093 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:06:34.093 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:34.093 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58955
00:06:34.093 killing process with pid 58955
04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:34.093 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:34.093 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58955'
00:06:34.093 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58955
00:06:34.093 04:23:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58955
00:06:35.026
00:06:35.026 real 0m2.311s
00:06:35.026 user 0m2.366s
00:06:35.026 sys 0m0.397s
00:06:35.026 04:23:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:35.285 04:23:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:35.285 ************************************
00:06:35.285 END TEST default_locks_via_rpc
00:06:35.285 ************************************
00:06:35.285 04:23:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:35.285 04:23:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:35.285 04:23:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:35.285 04:23:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:35.285 ************************************
00:06:35.285 START TEST non_locking_app_on_locked_coremask
00:06:35.285 ************************************
00:06:35.285 04:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:06:35.285 04:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59007
00:06:35.285 04:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59007 /var/tmp/spdk.sock
00:06:35.285 04:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59007 ']'
00:06:35.285 04:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:35.285 04:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:35.285 04:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:35.285 04:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:35.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:35.285 04:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:35.285 04:23:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:35.285 [2024-11-27 04:23:31.745629] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:06:35.544 [2024-11-27 04:23:31.745958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59007 ]
00:06:35.544 [2024-11-27 04:23:31.903045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.544 [2024-11-27 04:23:31.983554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:36.111 04:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:36.111 04:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:36.111 04:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59023
00:06:36.111 04:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59023 /var/tmp/spdk2.sock
00:06:36.111 04:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:36.111 04:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59023 ']'
00:06:36.111 04:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:36.111 04:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:36.111 04:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:36.111 04:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:36.111 04:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:36.369 [2024-11-27 04:23:32.603437] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:06:36.369 [2024-11-27 04:23:32.603761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59023 ]
00:06:36.369 [2024-11-27 04:23:32.768267] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
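The point of the second launch above, stripped to its two commands (both taken from the trace; backgrounding and ordering are implied by the suite rather than shown):

    # First target claims core 0 and creates /var/tmp/spdk_cpu_lock_000.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    # Second target asks for the same core but opts out of the lock check,
    # so both can run; it answers RPC on its own socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &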
00:06:36.369 [2024-11-27 04:23:32.768308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.369 [2024-11-27 04:23:32.923926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.302 04:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:37.302 04:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:37.302 04:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59007
00:06:37.302 04:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59007
00:06:37.302 04:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:37.560 04:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59007
00:06:37.560 04:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59007 ']'
00:06:37.560 04:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59007
00:06:37.560 04:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:37.560 04:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:37.560 04:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59007
00:06:37.818 killing process with pid 59007
04:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:37.818 04:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:37.818 04:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59007'
00:06:37.818 04:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59007
00:06:37.818 04:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59007
00:06:40.350 04:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59023
00:06:40.350 04:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59023 ']'
00:06:40.350 04:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59023
00:06:40.350 04:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:40.350 04:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:40.350 04:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59023
killing process with pid 59023
00:06:40.350 04:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:40.350 04:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:40.350 04:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59023'
04:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59023
04:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59023
00:06:41.285 ************************************
00:06:41.285 END TEST non_locking_app_on_locked_coremask
00:06:41.285 ************************************
00:06:41.285
00:06:41.285 real 0m6.073s
00:06:41.285 user 0m6.308s
00:06:41.285 sys 0m0.783s
00:06:41.285 04:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:41.285 04:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:41.285 04:23:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:41.285 04:23:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:41.285 04:23:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:41.285 04:23:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:41.285 ************************************
00:06:41.285 START TEST locking_app_on_unlocked_coremask
00:06:41.285 ************************************
00:06:41.285 04:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:41.285 04:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59114
00:06:41.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:41.285 04:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59114 /var/tmp/spdk.sock
00:06:41.285 04:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59114 ']'
00:06:41.285 04:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:41.285 04:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:41.285 04:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:41.285 04:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:41.285 04:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:41.285 04:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:41.545 [2024-11-27 04:23:37.877025] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:06:41.545 [2024-11-27 04:23:37.877321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59114 ]
00:06:41.545 [2024-11-27 04:23:38.037073] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
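The teardown steps that keep recurring in this trace (kill -0, ps, kill, wait) amount to a single helper. A condensed sketch of what the xtrace shows, with the sudo special case elided:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                        # probe only: is it alive?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for spdk_tgt
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap the child so later lock checks see a clean state
    }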
00:06:41.545 [2024-11-27 04:23:38.037236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:41.804 [2024-11-27 04:23:38.135242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:42.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:42.370 04:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:42.370 04:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:42.370 04:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:42.370 04:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59130
00:06:42.370 04:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59130 /var/tmp/spdk2.sock
00:06:42.370 04:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59130 ']'
00:06:42.370 04:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:42.370 04:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:42.370 04:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:42.370 04:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:42.370 04:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:42.370 [2024-11-27 04:23:38.790169] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:06:42.370 [2024-11-27 04:23:38.790448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59130 ]
00:06:42.628 [2024-11-27 04:23:38.964870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:42.628 [2024-11-27 04:23:39.156008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:44.006 04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:44.006 04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:44.006 04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59130
00:06:44.006 04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59130
00:06:44.006 04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:44.264 04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59114
00:06:44.264 04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59114 ']'
00:06:44.264 04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59114
00:06:44.264 04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:44.264 04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:44.264 04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59114
00:06:44.264 killing process with pid 59114
04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:44.264 04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:44.264 04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59114'
00:06:44.264 04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59114
00:06:44.264 04:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59114
00:06:46.796 04:23:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59130
00:06:46.796 04:23:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59130 ']'
00:06:46.796 04:23:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59130
00:06:46.796 04:23:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:46.796 04:23:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:46.796 04:23:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59130
killing process with pid 59130
00:06:46.796 04:23:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
04:23:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:46.796 04:23:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59130'
00:06:46.796 04:23:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59130
00:06:46.796 04:23:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59130
00:06:48.171 ************************************
00:06:48.171 END TEST locking_app_on_unlocked_coremask
00:06:48.171 ************************************
00:06:48.171
00:06:48.171 real 0m6.617s
00:06:48.171 user 0m6.863s
00:06:48.171 sys 0m0.853s
00:06:48.171 04:23:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:48.171 04:23:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:48.171 04:23:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:48.171 04:23:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:48.171 04:23:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:48.171 04:23:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:48.171 ************************************
00:06:48.171 START TEST locking_app_on_locked_coremask
00:06:48.171 ************************************
00:06:48.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 04:23:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:48.171 04:23:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59227
00:06:48.171 04:23:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59227 /var/tmp/spdk.sock
00:06:48.171 04:23:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59227 ']'
00:06:48.171 04:23:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:48.171 04:23:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:48.171 04:23:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:48.171 04:23:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:48.171 04:23:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:48.171 04:23:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:48.172 [2024-11-27 04:23:44.542867] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
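While this target initializes, it is worth restating the waitforlisten helper that every startup in this trace goes through. rpc_addr, max_retries and both messages are from the trace; the polling body is a paraphrase, not the actual autotest_common.sh code:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            if ! kill -0 "$pid" 2> /dev/null; then
                echo "ERROR: process (pid: $pid) is no longer running"
                return 1
            fi
            [[ -S $rpc_addr ]] && return 0   # socket exists: target is listening
            sleep 0.1
        done
        return 1
    }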
00:06:48.172 [2024-11-27 04:23:44.542981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59227 ]
00:06:48.172 [2024-11-27 04:23:44.699361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.430 [2024-11-27 04:23:44.774438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59243
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59243 /var/tmp/spdk2.sock
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59243 /var/tmp/spdk2.sock
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:48.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59243 /var/tmp/spdk2.sock
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59243 ']'
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:48.996 04:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:48.996 [2024-11-27 04:23:45.460909] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:06:48.996 [2024-11-27 04:23:45.461210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59243 ]
00:06:49.256 [2024-11-27 04:23:45.624169] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59227 has claimed it.
00:06:49.256 [2024-11-27 04:23:45.624215] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:49.514 ERROR: process (pid: 59243) is no longer running
00:06:49.514 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59243) - No such process
00:06:49.514 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:49.514 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:49.514 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:49.514 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:49.514 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:49.514 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:49.514 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59227
00:06:49.514 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59227
00:06:49.514 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:49.772 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59227
00:06:49.772 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59227 ']'
00:06:49.772 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59227
00:06:49.772 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:49.772 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:49.772 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59227
killing process with pid 59227
00:06:49.772 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:49.772 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:49.772 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59227'
00:06:49.772 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59227
00:06:49.772 04:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59227
00:06:51.143 ************************************
00:06:51.143 END TEST locking_app_on_locked_coremask
00:06:51.143 ************************************
00:06:51.143
00:06:51.143 real 0m2.949s
00:06:51.143 user 0m3.183s
00:06:51.143 sys 0m0.525s
00:06:51.143 04:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:51.143 04:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:51.143 04:23:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:51.143 04:23:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:51.143 04:23:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:51.143 04:23:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:51.144 ************************************
00:06:51.144 START TEST locking_overlapped_coremask
00:06:51.144 ************************************
00:06:51.144 04:23:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:06:51.144 04:23:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59296
00:06:51.144 04:23:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59296 /var/tmp/spdk.sock
00:06:51.144 04:23:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59296 ']'
00:06:51.144 04:23:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:06:51.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 04:23:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:51.144 04:23:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:51.144 04:23:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:51.144 04:23:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:51.144 04:23:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:51.144 [2024-11-27 04:23:47.550052] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
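The cpumask collision this test is about to provoke is plain bit arithmetic: the running target holds -m 0x7, and the second one asks for -m 0x1c.

    echo $(( 0x7 & 0x1c ))   # -> 4: bit 2 is set in both masks, i.e. cores
                             # {0,1,2} and {2,3,4} overlap exactly on core 2,
                             # the core named in the claim error below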
00:06:51.967 [2024-11-27 04:23:48.481113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59308 ]
00:06:52.225 [2024-11-27 04:23:48.669643] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59296 has claimed it.
00:06:52.225 [2024-11-27 04:23:48.669698] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:52.816 ERROR: process (pid: 59308) is no longer running
00:06:52.816 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59308) - No such process
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59296
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59296 ']'
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59296
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59296
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59296'
killing process with pid 59296
00:06:52.816 04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59296
04:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59296
00:06:53.760
00:06:53.760 real 0m2.792s
00:06:53.760 user 0m7.657s
00:06:53.760 sys 0m0.423s
00:06:53.760 04:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:53.760 04:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:53.760 ************************************
00:06:53.760 END TEST locking_overlapped_coremask
00:06:53.760 ************************************
00:06:53.760 04:23:50 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:53.760 04:23:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:53.760 04:23:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:53.760 04:23:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:53.760 ************************************
00:06:53.760 START TEST locking_overlapped_coremask_via_rpc
00:06:53.760 ************************************
00:06:53.760 04:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:06:53.760 04:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59361
00:06:53.760 04:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59361 /var/tmp/spdk.sock
00:06:53.760 04:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:53.760 04:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59361 ']'
00:06:53.760 04:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:53.760 04:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:53.760 04:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:53.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:53.760 04:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:54.022 04:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:54.022 [2024-11-27 04:23:50.384199] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:06:54.022 [2024-11-27 04:23:50.384818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59361 ]
00:06:54.022 [2024-11-27 04:23:50.544558] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
[2024-11-27 04:23:50.544686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:54.283 [2024-11-27 04:23:50.623608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:54.283 [2024-11-27 04:23:50.623816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:54.283 [2024-11-27 04:23:50.623987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:54.856 04:23:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:54.856 04:23:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:54.856 04:23:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59379
00:06:54.856 04:23:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:54.856 04:23:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59379 /var/tmp/spdk2.sock
00:06:54.856 04:23:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59379 ']'
00:06:54.856 04:23:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:54.856 04:23:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:54.856 04:23:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:54.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:54.856 04:23:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:54.856 04:23:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:54.856 [2024-11-27 04:23:51.245951] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:06:54.856 [2024-11-27 04:23:51.246499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59379 ]
00:06:54.856 [2024-11-27 04:23:51.408801] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
[2024-11-27 04:23:51.408841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:55.116 [2024-11-27 04:23:51.570201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:55.116 [2024-11-27 04:23:51.573833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:55.116 [2024-11-27 04:23:51.573861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:56.058 [2024-11-27 04:23:52.518895] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59361 has claimed it.
00:06:56.058 request:
00:06:56.058 {
00:06:56.058 "method": "framework_enable_cpumask_locks",
00:06:56.058 "req_id": 1
00:06:56.058 }
00:06:56.058 Got JSON-RPC error response
00:06:56.058 response:
00:06:56.058 {
00:06:56.058 "code": -32603,
00:06:56.058 "message": "Failed to claim CPU core: 2"
00:06:56.058 }
00:06:56.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59361 /var/tmp/spdk.sock
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59361 ']'
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:56.058 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:56.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:56.319 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:56.319 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:56.319 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59379 /var/tmp/spdk2.sock
00:06:56.319 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59379 ']'
00:06:56.319 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:56.319 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:56.319 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:56.319 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.319 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.578 ************************************ 00:06:56.578 END TEST locking_overlapped_coremask_via_rpc 00:06:56.578 ************************************ 00:06:56.578 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.578 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:56.578 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:56.578 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:56.578 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:56.578 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:56.578 00:06:56.578 real 0m2.633s 00:06:56.578 user 0m1.036s 00:06:56.578 sys 0m0.110s 00:06:56.578 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.578 04:23:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.578 04:23:52 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:56.578 04:23:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59361 ]] 00:06:56.578 04:23:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59361 00:06:56.578 04:23:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59361 ']' 00:06:56.578 04:23:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59361 00:06:56.578 04:23:52 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:56.578 04:23:52 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.578 04:23:52 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59361 00:06:56.578 killing process with pid 59361 00:06:56.578 04:23:52 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.578 04:23:52 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.578 04:23:52 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59361' 00:06:56.578 04:23:52 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59361 00:06:56.578 04:23:52 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59361 00:06:57.964 04:23:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59379 ]] 00:06:57.964 04:23:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59379 00:06:57.964 04:23:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59379 ']' 00:06:57.964 04:23:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59379 00:06:57.964 04:23:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:57.964 04:23:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.964 
04:23:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59379 00:06:57.964 04:23:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:57.964 04:23:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:57.964 04:23:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59379' 00:06:57.964 killing process with pid 59379 00:06:57.964 04:23:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59379 00:06:57.964 04:23:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59379 00:06:59.350 04:23:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:59.350 04:23:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:59.350 04:23:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59361 ]] 00:06:59.350 04:23:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59361 00:06:59.350 04:23:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59361 ']' 00:06:59.350 04:23:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59361 00:06:59.350 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59361) - No such process 00:06:59.350 Process with pid 59361 is not found 00:06:59.350 04:23:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59361 is not found' 00:06:59.350 Process with pid 59379 is not found 00:06:59.350 04:23:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59379 ]] 00:06:59.350 04:23:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59379 00:06:59.350 04:23:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59379 ']' 00:06:59.350 04:23:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59379 00:06:59.350 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59379) - No such process 00:06:59.350 04:23:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59379 is not found' 00:06:59.350 04:23:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:59.350 ************************************ 00:06:59.350 END TEST cpu_locks 00:06:59.350 ************************************ 00:06:59.350 00:06:59.350 real 0m29.048s 00:06:59.350 user 0m49.813s 00:06:59.350 sys 0m4.282s 00:06:59.350 04:23:55 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.350 04:23:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.350 ************************************ 00:06:59.350 END TEST event 00:06:59.350 ************************************ 00:06:59.350 00:06:59.350 real 0m56.479s 00:06:59.350 user 1m43.930s 00:06:59.350 sys 0m7.178s 00:06:59.350 04:23:55 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.350 04:23:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.350 04:23:55 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:59.350 04:23:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.350 04:23:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.350 04:23:55 -- common/autotest_common.sh@10 -- # set +x 00:06:59.350 ************************************ 00:06:59.350 START TEST thread 00:06:59.350 ************************************ 00:06:59.350 04:23:55 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:59.612 * Looking for test storage... 
00:06:59.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:59.612 04:23:55 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.612 04:23:55 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.612 04:23:55 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.612 04:23:56 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.612 04:23:56 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.612 04:23:56 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.612 04:23:56 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.612 04:23:56 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.612 04:23:56 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.612 04:23:56 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.612 04:23:56 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.612 04:23:56 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.612 04:23:56 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.612 04:23:56 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.612 04:23:56 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.612 04:23:56 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:59.612 04:23:56 thread -- scripts/common.sh@345 -- # : 1 00:06:59.612 04:23:56 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.612 04:23:56 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.612 04:23:56 thread -- scripts/common.sh@365 -- # decimal 1 00:06:59.612 04:23:56 thread -- scripts/common.sh@353 -- # local d=1 00:06:59.612 04:23:56 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.612 04:23:56 thread -- scripts/common.sh@355 -- # echo 1 00:06:59.612 04:23:56 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.612 04:23:56 thread -- scripts/common.sh@366 -- # decimal 2 00:06:59.612 04:23:56 thread -- scripts/common.sh@353 -- # local d=2 00:06:59.612 04:23:56 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.612 04:23:56 thread -- scripts/common.sh@355 -- # echo 2 00:06:59.612 04:23:56 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.612 04:23:56 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.612 04:23:56 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.612 04:23:56 thread -- scripts/common.sh@368 -- # return 0 00:06:59.612 04:23:56 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.612 04:23:56 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.612 --rc genhtml_branch_coverage=1 00:06:59.612 --rc genhtml_function_coverage=1 00:06:59.612 --rc genhtml_legend=1 00:06:59.612 --rc geninfo_all_blocks=1 00:06:59.612 --rc geninfo_unexecuted_blocks=1 00:06:59.612 00:06:59.612 ' 00:06:59.612 04:23:56 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.612 --rc genhtml_branch_coverage=1 00:06:59.612 --rc genhtml_function_coverage=1 00:06:59.612 --rc genhtml_legend=1 00:06:59.612 --rc geninfo_all_blocks=1 00:06:59.612 --rc geninfo_unexecuted_blocks=1 00:06:59.612 00:06:59.612 ' 00:06:59.612 04:23:56 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:59.612 --rc genhtml_branch_coverage=1 00:06:59.612 --rc genhtml_function_coverage=1 00:06:59.612 --rc genhtml_legend=1 00:06:59.612 --rc geninfo_all_blocks=1 00:06:59.612 --rc geninfo_unexecuted_blocks=1 00:06:59.612 00:06:59.612 ' 00:06:59.612 04:23:56 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.612 --rc genhtml_branch_coverage=1 00:06:59.612 --rc genhtml_function_coverage=1 00:06:59.612 --rc genhtml_legend=1 00:06:59.612 --rc geninfo_all_blocks=1 00:06:59.612 --rc geninfo_unexecuted_blocks=1 00:06:59.612 00:06:59.612 ' 00:06:59.612 04:23:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.612 04:23:56 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:59.612 04:23:56 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.612 04:23:56 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.612 ************************************ 00:06:59.612 START TEST thread_poller_perf 00:06:59.612 ************************************ 00:06:59.612 04:23:56 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.612 [2024-11-27 04:23:56.084336] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:59.612 [2024-11-27 04:23:56.084500] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59539 ] 00:06:59.872 [2024-11-27 04:23:56.240095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.872 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:59.872 [2024-11-27 04:23:56.347591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.258 [2024-11-27T04:23:57.845Z] ====================================== 00:07:01.258 [2024-11-27T04:23:57.845Z] busy:2614951162 (cyc) 00:07:01.258 [2024-11-27T04:23:57.845Z] total_run_count: 305000 00:07:01.258 [2024-11-27T04:23:57.845Z] tsc_hz: 2600000000 (cyc) 00:07:01.258 [2024-11-27T04:23:57.845Z] ====================================== 00:07:01.258 [2024-11-27T04:23:57.845Z] poller_cost: 8573 (cyc), 3297 (nsec) 00:07:01.258 ************************************ 00:07:01.258 END TEST thread_poller_perf 00:07:01.258 ************************************ 00:07:01.258 00:07:01.258 real 0m1.475s 00:07:01.258 user 0m1.281s 00:07:01.258 sys 0m0.079s 00:07:01.258 04:23:57 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.258 04:23:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.258 04:23:57 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.258 04:23:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:01.258 04:23:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.258 04:23:57 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.258 ************************************ 00:07:01.258 START TEST thread_poller_perf 00:07:01.258 ************************************ 00:07:01.258 04:23:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.258 [2024-11-27 04:23:57.618625] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:01.258 [2024-11-27 04:23:57.618872] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59576 ] 00:07:01.258 [2024-11-27 04:23:57.775563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.520 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:01.520 [2024-11-27 04:23:57.899382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.466 [2024-11-27T04:23:59.053Z] ====================================== 00:07:02.466 [2024-11-27T04:23:59.053Z] busy:2603918826 (cyc) 00:07:02.466 [2024-11-27T04:23:59.053Z] total_run_count: 4266000 00:07:02.466 [2024-11-27T04:23:59.053Z] tsc_hz: 2600000000 (cyc) 00:07:02.466 [2024-11-27T04:23:59.053Z] ====================================== 00:07:02.466 [2024-11-27T04:23:59.053Z] poller_cost: 610 (cyc), 234 (nsec) 00:07:02.466 ************************************ 00:07:02.466 END TEST thread_poller_perf 00:07:02.466 ************************************ 00:07:02.466 00:07:02.466 real 0m1.431s 00:07:02.466 user 0m1.255s 00:07:02.466 sys 0m0.067s 00:07:02.466 04:23:59 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.466 04:23:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:02.729 04:23:59 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:02.729 ************************************ 00:07:02.729 END TEST thread 00:07:02.729 ************************************ 00:07:02.729 00:07:02.729 real 0m3.155s 00:07:02.729 user 0m2.658s 00:07:02.729 sys 0m0.267s 00:07:02.729 04:23:59 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.729 04:23:59 thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.729 04:23:59 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:02.729 04:23:59 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:02.729 04:23:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.729 04:23:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.729 04:23:59 -- common/autotest_common.sh@10 -- # set +x 00:07:02.729 ************************************ 00:07:02.729 START TEST app_cmdline 00:07:02.729 ************************************ 00:07:02.729 04:23:59 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:02.729 * Looking for test storage... 
00:07:02.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:02.729 04:23:59 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.729 04:23:59 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.729 04:23:59 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.729 04:23:59 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.729 04:23:59 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:02.729 04:23:59 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.729 04:23:59 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.729 --rc genhtml_branch_coverage=1 00:07:02.729 --rc genhtml_function_coverage=1 00:07:02.729 --rc genhtml_legend=1 00:07:02.729 --rc geninfo_all_blocks=1 00:07:02.729 --rc geninfo_unexecuted_blocks=1 00:07:02.729 00:07:02.729 ' 00:07:02.729 04:23:59 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.730 --rc genhtml_branch_coverage=1 00:07:02.730 --rc genhtml_function_coverage=1 00:07:02.730 --rc genhtml_legend=1 00:07:02.730 --rc geninfo_all_blocks=1 00:07:02.730 --rc geninfo_unexecuted_blocks=1 00:07:02.730 
00:07:02.730 ' 00:07:02.730 04:23:59 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:02.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.730 --rc genhtml_branch_coverage=1 00:07:02.730 --rc genhtml_function_coverage=1 00:07:02.730 --rc genhtml_legend=1 00:07:02.730 --rc geninfo_all_blocks=1 00:07:02.730 --rc geninfo_unexecuted_blocks=1 00:07:02.730 00:07:02.730 ' 00:07:02.730 04:23:59 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.730 --rc genhtml_branch_coverage=1 00:07:02.730 --rc genhtml_function_coverage=1 00:07:02.730 --rc genhtml_legend=1 00:07:02.730 --rc geninfo_all_blocks=1 00:07:02.730 --rc geninfo_unexecuted_blocks=1 00:07:02.730 00:07:02.730 ' 00:07:02.730 04:23:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:02.730 04:23:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59659 00:07:02.730 04:23:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59659 00:07:02.730 04:23:59 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59659 ']' 00:07:02.730 04:23:59 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.730 04:23:59 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:02.730 04:23:59 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.730 04:23:59 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.730 04:23:59 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.730 04:23:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.730 [2024-11-27 04:23:59.309413] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:02.730 [2024-11-27 04:23:59.309507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59659 ] 00:07:02.992 [2024-11-27 04:23:59.459566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.992 [2024-11-27 04:23:59.535816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.566 04:24:00 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.566 04:24:00 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:03.566 04:24:00 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:03.827 { 00:07:03.827 "version": "SPDK v25.01-pre git sha1 2f2acf4eb", 00:07:03.827 "fields": { 00:07:03.827 "major": 25, 00:07:03.827 "minor": 1, 00:07:03.827 "patch": 0, 00:07:03.827 "suffix": "-pre", 00:07:03.827 "commit": "2f2acf4eb" 00:07:03.827 } 00:07:03.827 } 00:07:03.827 04:24:00 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:03.827 04:24:00 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:03.827 04:24:00 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:03.827 04:24:00 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:03.827 04:24:00 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:03.827 04:24:00 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:03.827 04:24:00 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.827 04:24:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.827 04:24:00 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:03.827 04:24:00 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.828 04:24:00 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:03.828 04:24:00 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:03.828 04:24:00 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.828 04:24:00 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:03.828 04:24:00 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.828 04:24:00 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.828 04:24:00 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.828 04:24:00 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.828 04:24:00 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.828 04:24:00 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.828 04:24:00 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.828 04:24:00 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.828 04:24:00 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:03.828 04:24:00 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.089 request: 00:07:04.089 { 00:07:04.089 "method": "env_dpdk_get_mem_stats", 00:07:04.089 "req_id": 1 00:07:04.089 } 00:07:04.089 Got JSON-RPC error response 00:07:04.089 response: 00:07:04.089 { 00:07:04.089 "code": -32601, 00:07:04.089 "message": "Method not found" 00:07:04.089 } 00:07:04.089 04:24:00 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:04.089 04:24:00 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:04.089 04:24:00 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:04.089 04:24:00 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:04.089 04:24:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59659 00:07:04.089 04:24:00 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59659 ']' 00:07:04.089 04:24:00 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59659 00:07:04.089 04:24:00 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:04.089 04:24:00 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.089 04:24:00 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59659 00:07:04.089 killing process with pid 59659 00:07:04.089 04:24:00 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.089 04:24:00 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.089 04:24:00 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59659' 00:07:04.089 04:24:00 app_cmdline -- common/autotest_common.sh@973 -- # kill 59659 00:07:04.089 04:24:00 app_cmdline -- common/autotest_common.sh@978 -- # wait 59659 00:07:05.477 ************************************ 00:07:05.477 END TEST app_cmdline 00:07:05.477 ************************************ 00:07:05.477 00:07:05.477 real 0m2.640s 00:07:05.477 user 0m2.926s 00:07:05.477 sys 0m0.406s 00:07:05.477 04:24:01 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.477 04:24:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.477 04:24:01 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:05.477 04:24:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.477 04:24:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.477 04:24:01 -- common/autotest_common.sh@10 -- # set +x 00:07:05.477 ************************************ 00:07:05.477 START TEST version 00:07:05.477 ************************************ 00:07:05.477 04:24:01 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:05.477 * Looking for test storage... 
00:07:05.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:05.477 04:24:01 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.477 04:24:01 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.477 04:24:01 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.477 04:24:01 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.477 04:24:01 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.477 04:24:01 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.477 04:24:01 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.477 04:24:01 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.477 04:24:01 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.477 04:24:01 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.477 04:24:01 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.477 04:24:01 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.477 04:24:01 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.477 04:24:01 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.477 04:24:01 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.477 04:24:01 version -- scripts/common.sh@344 -- # case "$op" in 00:07:05.477 04:24:01 version -- scripts/common.sh@345 -- # : 1 00:07:05.477 04:24:01 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.477 04:24:01 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.477 04:24:01 version -- scripts/common.sh@365 -- # decimal 1 00:07:05.477 04:24:01 version -- scripts/common.sh@353 -- # local d=1 00:07:05.477 04:24:01 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.477 04:24:01 version -- scripts/common.sh@355 -- # echo 1 00:07:05.477 04:24:01 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.477 04:24:01 version -- scripts/common.sh@366 -- # decimal 2 00:07:05.477 04:24:01 version -- scripts/common.sh@353 -- # local d=2 00:07:05.477 04:24:01 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.477 04:24:01 version -- scripts/common.sh@355 -- # echo 2 00:07:05.477 04:24:01 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.477 04:24:01 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.477 04:24:01 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.477 04:24:01 version -- scripts/common.sh@368 -- # return 0 00:07:05.477 04:24:01 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.477 04:24:01 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.477 --rc genhtml_branch_coverage=1 00:07:05.477 --rc genhtml_function_coverage=1 00:07:05.477 --rc genhtml_legend=1 00:07:05.477 --rc geninfo_all_blocks=1 00:07:05.477 --rc geninfo_unexecuted_blocks=1 00:07:05.477 00:07:05.477 ' 00:07:05.477 04:24:01 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.477 --rc genhtml_branch_coverage=1 00:07:05.477 --rc genhtml_function_coverage=1 00:07:05.477 --rc genhtml_legend=1 00:07:05.477 --rc geninfo_all_blocks=1 00:07:05.477 --rc geninfo_unexecuted_blocks=1 00:07:05.477 00:07:05.477 ' 00:07:05.477 04:24:01 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.477 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:05.477 --rc genhtml_branch_coverage=1 00:07:05.477 --rc genhtml_function_coverage=1 00:07:05.477 --rc genhtml_legend=1 00:07:05.477 --rc geninfo_all_blocks=1 00:07:05.477 --rc geninfo_unexecuted_blocks=1 00:07:05.477 00:07:05.477 ' 00:07:05.477 04:24:01 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.477 --rc genhtml_branch_coverage=1 00:07:05.477 --rc genhtml_function_coverage=1 00:07:05.477 --rc genhtml_legend=1 00:07:05.477 --rc geninfo_all_blocks=1 00:07:05.477 --rc geninfo_unexecuted_blocks=1 00:07:05.477 00:07:05.478 ' 00:07:05.478 04:24:01 version -- app/version.sh@17 -- # get_header_version major 00:07:05.478 04:24:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.478 04:24:01 version -- app/version.sh@14 -- # cut -f2 00:07:05.478 04:24:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.478 04:24:01 version -- app/version.sh@17 -- # major=25 00:07:05.478 04:24:01 version -- app/version.sh@18 -- # get_header_version minor 00:07:05.478 04:24:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.478 04:24:01 version -- app/version.sh@14 -- # cut -f2 00:07:05.478 04:24:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.478 04:24:01 version -- app/version.sh@18 -- # minor=1 00:07:05.478 04:24:01 version -- app/version.sh@19 -- # get_header_version patch 00:07:05.478 04:24:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.478 04:24:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.478 04:24:01 version -- app/version.sh@14 -- # cut -f2 00:07:05.478 04:24:01 version -- app/version.sh@19 -- # patch=0 00:07:05.478 04:24:01 version -- app/version.sh@20 -- # get_header_version suffix 00:07:05.478 04:24:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.478 04:24:01 version -- app/version.sh@14 -- # cut -f2 00:07:05.478 04:24:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.478 04:24:01 version -- app/version.sh@20 -- # suffix=-pre 00:07:05.478 04:24:01 version -- app/version.sh@22 -- # version=25.1 00:07:05.478 04:24:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:05.478 04:24:01 version -- app/version.sh@28 -- # version=25.1rc0 00:07:05.478 04:24:01 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:05.478 04:24:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:05.478 04:24:01 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:05.478 04:24:01 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:05.478 ************************************ 00:07:05.478 END TEST version 00:07:05.478 ************************************ 00:07:05.478 00:07:05.478 real 0m0.205s 00:07:05.478 user 0m0.129s 00:07:05.478 sys 0m0.100s 00:07:05.478 04:24:01 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.478 04:24:01 version -- common/autotest_common.sh@10 -- # set +x 00:07:05.478 04:24:02 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:05.478 04:24:02 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:05.478 04:24:02 -- spdk/autotest.sh@194 -- # uname -s 00:07:05.478 04:24:02 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:05.478 04:24:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:05.478 04:24:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:05.478 04:24:02 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:05.478 04:24:02 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:05.478 04:24:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.478 04:24:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.478 04:24:02 -- common/autotest_common.sh@10 -- # set +x 00:07:05.737 ************************************ 00:07:05.737 START TEST blockdev_nvme 00:07:05.737 ************************************ 00:07:05.737 04:24:02 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:05.737 * Looking for test storage... 00:07:05.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:05.737 04:24:02 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.737 04:24:02 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.737 04:24:02 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.737 04:24:02 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.737 04:24:02 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:05.737 04:24:02 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.737 04:24:02 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.737 --rc genhtml_branch_coverage=1 00:07:05.737 --rc genhtml_function_coverage=1 00:07:05.737 --rc genhtml_legend=1 00:07:05.737 --rc geninfo_all_blocks=1 00:07:05.737 --rc geninfo_unexecuted_blocks=1 00:07:05.737 00:07:05.737 ' 00:07:05.737 04:24:02 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.737 --rc genhtml_branch_coverage=1 00:07:05.737 --rc genhtml_function_coverage=1 00:07:05.737 --rc genhtml_legend=1 00:07:05.737 --rc geninfo_all_blocks=1 00:07:05.737 --rc geninfo_unexecuted_blocks=1 00:07:05.737 00:07:05.737 ' 00:07:05.737 04:24:02 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.737 --rc genhtml_branch_coverage=1 00:07:05.737 --rc genhtml_function_coverage=1 00:07:05.737 --rc genhtml_legend=1 00:07:05.737 --rc geninfo_all_blocks=1 00:07:05.737 --rc geninfo_unexecuted_blocks=1 00:07:05.737 00:07:05.737 ' 00:07:05.737 04:24:02 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.737 --rc genhtml_branch_coverage=1 00:07:05.737 --rc genhtml_function_coverage=1 00:07:05.737 --rc genhtml_legend=1 00:07:05.737 --rc geninfo_all_blocks=1 00:07:05.737 --rc geninfo_unexecuted_blocks=1 00:07:05.737 00:07:05.737 ' 00:07:05.737 04:24:02 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:05.737 04:24:02 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:05.737 04:24:02 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:05.737 04:24:02 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:05.737 04:24:02 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:05.737 04:24:02 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:05.737 04:24:02 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:05.737 04:24:02 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:05.737 04:24:02 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59828 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59828 00:07:05.738 04:24:02 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59828 ']' 00:07:05.738 04:24:02 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.738 04:24:02 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:05.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.738 04:24:02 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.738 04:24:02 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.738 04:24:02 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.738 04:24:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.738 [2024-11-27 04:24:02.307578] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:05.738 [2024-11-27 04:24:02.307828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59828 ] 00:07:05.995 [2024-11-27 04:24:02.467832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.995 [2024-11-27 04:24:02.568247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.585 04:24:03 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.585 04:24:03 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:06.585 04:24:03 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:06.585 04:24:03 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:07:06.585 04:24:03 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:06.585 04:24:03 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:06.585 04:24:03 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:06.846 04:24:03 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:06.846 04:24:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.846 04:24:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.106 04:24:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.106 04:24:03 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:07.106 04:24:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.106 04:24:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.106 04:24:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.106 04:24:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:07:07.106 04:24:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:07.106 04:24:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.106 04:24:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.106 04:24:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.107 04:24:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.107 04:24:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.107 04:24:03 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:07.107 04:24:03 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:07.107 04:24:03 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.107 04:24:03 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:07.107 04:24:03 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0f9408ec-0fb9-413c-ab68-d39e7c09a347"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0f9408ec-0fb9-413c-ab68-d39e7c09a347",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "2b1d4711-447c-4176-aaff-93d278c521c6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2b1d4711-447c-4176-aaff-93d278c521c6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "3e234c6a-b047-4f03-a059-4caa525b1622"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3e234c6a-b047-4f03-a059-4caa525b1622",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "23251d5a-62c1-4140-94b2-3e5f73a12a15"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "23251d5a-62c1-4140-94b2-3e5f73a12a15",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "68753f99-c82b-4ab2-a8a1-f32cb9c6b344"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "68753f99-c82b-4ab2-a8a1-f32cb9c6b344",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "be3b5dc3-c09b-4fdb-858d-6d7c37dbb609"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "be3b5dc3-c09b-4fdb-858d-6d7c37dbb609",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:07.107 04:24:03 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:07.107 04:24:03 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:07.107 04:24:03 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:07.107 04:24:03 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:07.107 04:24:03 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59828 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59828 ']' 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59828 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:07.107 04:24:03 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59828 00:07:07.107 killing process with pid 59828 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59828' 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59828 00:07:07.107 04:24:03 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59828 00:07:09.017 04:24:05 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:09.017 04:24:05 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:09.017 04:24:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:09.017 04:24:05 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.017 04:24:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:09.017 ************************************ 00:07:09.017 START TEST bdev_hello_world 00:07:09.017 ************************************ 00:07:09.017 04:24:05 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:09.017 [2024-11-27 04:24:05.152648] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:09.017 [2024-11-27 04:24:05.152928] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59910 ] 00:07:09.017 [2024-11-27 04:24:05.307550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.017 [2024-11-27 04:24:05.386936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.589 [2024-11-27 04:24:05.879816] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:09.589 [2024-11-27 04:24:05.879853] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:09.589 [2024-11-27 04:24:05.879868] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:09.589 [2024-11-27 04:24:05.881810] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:09.589 [2024-11-27 04:24:05.882220] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:09.589 [2024-11-27 04:24:05.882243] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:09.589 [2024-11-27 04:24:05.882382] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
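The hello_bdev output above is SPDK's bundled example app doing exactly what its notices describe: open the named bdev, write a buffer, read it back, print the result, stop. A minimal way to reproduce the run by hand (a sketch assuming the same repo checkout and the bdev.json generated for this job; SPDK apps typically need root for hugepages/VFIO):

    cd /home/vagrant/spdk_repo/spdk
    # open Nvme0n1 through the bdev layer, write "Hello World!", read it back
    build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1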
00:07:09.589 00:07:09.589 [2024-11-27 04:24:05.882400] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:10.159 00:07:10.159 real 0m1.355s 00:07:10.159 user 0m1.096s 00:07:10.159 sys 0m0.153s 00:07:10.159 ************************************ 00:07:10.159 END TEST bdev_hello_world 00:07:10.159 ************************************ 00:07:10.159 04:24:06 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.159 04:24:06 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:10.159 04:24:06 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:10.159 04:24:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.159 04:24:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.159 04:24:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.159 ************************************ 00:07:10.159 START TEST bdev_bounds 00:07:10.159 ************************************ 00:07:10.159 04:24:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:10.159 04:24:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59946 00:07:10.159 04:24:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:10.159 04:24:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:10.159 Process bdevio pid: 59946 00:07:10.159 04:24:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59946' 00:07:10.159 04:24:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59946 00:07:10.159 04:24:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59946 ']' 00:07:10.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.159 04:24:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.159 04:24:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.159 04:24:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.159 04:24:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.159 04:24:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:10.159 [2024-11-27 04:24:06.586716] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
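The bdev_bounds stage follows the harness's usual app lifecycle: launch bdevio in the background, arm a cleanup trap, block in waitforlisten until the app's RPC socket (/var/tmp/spdk.sock) accepts connections, and killprocess the pid at the end. A condensed sketch of that pattern, using the autotest_common.sh helpers that appear in the trace (backgrounding and pid capture are implied rather than shown by the xtrace):

    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$bdevio_pid"        # polls until /var/tmp/spdk.sock is listening
    # ... drive the tests over RPC ...
    trap - SIGINT SIGTERM EXIT
    killprocess "$bdevio_pid"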
00:07:10.159 [2024-11-27 04:24:06.587446] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59946 ] 00:07:10.419 [2024-11-27 04:24:06.759989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.419 [2024-11-27 04:24:06.839749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.419 [2024-11-27 04:24:06.839825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.419 [2024-11-27 04:24:06.839936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.991 04:24:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.992 04:24:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:10.992 04:24:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:10.992 I/O targets: 00:07:10.992 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:10.992 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:10.992 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:10.992 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:10.992 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:10.992 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:10.992 00:07:10.992 00:07:10.992 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.992 http://cunit.sourceforge.net/ 00:07:10.992 00:07:10.992 00:07:10.992 Suite: bdevio tests on: Nvme3n1 00:07:10.992 Test: blockdev write read block ...passed 00:07:10.992 Test: blockdev write zeroes read block ...passed 00:07:10.992 Test: blockdev write zeroes read no split ...passed 00:07:10.992 Test: blockdev write zeroes read split ...passed 00:07:10.992 Test: blockdev write zeroes read split partial ...passed 00:07:10.992 Test: blockdev reset ...[2024-11-27 04:24:07.545891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:10.992 passed 00:07:10.992 Test: blockdev write read 8 blocks ...[2024-11-27 04:24:07.548737] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
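Because bdevio was started with -w it does not run anything on its own; the suite output only begins once tests.py fires the perform_tests RPC at the default socket (the call appears just before the "I/O targets" banner). The driving command is simply:

    test/bdev/bdevio/tests.py perform_tests   # kicks off every suite against each unclaimed bdev

Each namespace then gets the same battery (reset, size/offset boundary checks, writev/readv variants, comparev, passthru), which is why the six "bdevio tests on:" suites read almost identically.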
00:07:10.992 passed 00:07:10.992 Test: blockdev write read size > 128k ...passed 00:07:10.992 Test: blockdev write read invalid size ...passed 00:07:10.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.992 Test: blockdev write read max offset ...passed 00:07:10.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.992 Test: blockdev writev readv 8 blocks ...passed 00:07:10.992 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.992 Test: blockdev writev readv block ...passed 00:07:10.992 Test: blockdev writev readv size > 128k ...passed 00:07:10.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.992 Test: blockdev comparev and writev ...[2024-11-27 04:24:07.557292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:07:10.992 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2b480a000 len:0x1000 00:07:10.992 [2024-11-27 04:24:07.557631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.992 passed 00:07:10.992 Test: blockdev nvme passthru vendor specific ...[2024-11-27 04:24:07.559065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:10.992 passed 00:07:10.992 Test: blockdev nvme admin passthru ...[2024-11-27 04:24:07.559337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:10.992 passed 00:07:10.992 Test: blockdev copy ...passed 00:07:10.992 Suite: bdevio tests on: Nvme2n3 00:07:10.992 Test: blockdev write read block ...passed 00:07:10.992 Test: blockdev write zeroes read block ...passed 00:07:10.992 Test: blockdev write zeroes read no split ...passed 00:07:11.254 Test: blockdev write zeroes read split ...passed 00:07:11.254 Test: blockdev write zeroes read split partial ...passed 00:07:11.254 Test: blockdev reset ...[2024-11-27 04:24:07.614016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:11.254 [2024-11-27 04:24:07.617003] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
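The COMPARE FAILURE (02/85) completions printed in the middle of passing "comparev and writev" tests are not device faults: the notices come from the qpair print path while the tests themselves pass, which suggests the miscompare branch is being exercised deliberately. Which I/O types each bdev actually advertises can be pulled from the same bdev_get_bdevs output dumped earlier; an illustrative jq filter (field names as in that dump):

    scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | "\(.name): compare=\(.supported_io_types.compare) compare_and_write=\(.supported_io_types.compare_and_write)"'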
00:07:11.254 passed 00:07:11.254 Test: blockdev write read 8 blocks ...passed 00:07:11.254 Test: blockdev write read size > 128k ...passed 00:07:11.254 Test: blockdev write read invalid size ...passed 00:07:11.254 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:11.254 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:11.254 Test: blockdev write read max offset ...passed 00:07:11.254 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:11.254 Test: blockdev writev readv 8 blocks ...passed 00:07:11.254 Test: blockdev writev readv 30 x 1block ...passed 00:07:11.254 Test: blockdev writev readv block ...passed 00:07:11.254 Test: blockdev writev readv size > 128k ...passed 00:07:11.254 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:11.254 Test: blockdev comparev and writev ...[2024-11-27 04:24:07.623369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x297a06000 len:0x1000 00:07:11.254 [2024-11-27 04:24:07.623407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:11.254 passed 00:07:11.254 Test: blockdev nvme passthru rw ...passed 00:07:11.254 Test: blockdev nvme passthru vendor specific ...passed 00:07:11.254 Test: blockdev nvme admin passthru ...[2024-11-27 04:24:07.623894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:11.254 [2024-11-27 04:24:07.623920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:11.254 passed 00:07:11.254 Test: blockdev copy ...passed 00:07:11.254 Suite: bdevio tests on: Nvme2n2 00:07:11.254 Test: blockdev write read block ...passed 00:07:11.254 Test: blockdev write zeroes read block ...passed 00:07:11.254 Test: blockdev write zeroes read no split ...passed 00:07:11.254 Test: blockdev write zeroes read split ...passed 00:07:11.254 Test: blockdev write zeroes read split partial ...passed 00:07:11.254 Test: blockdev reset ...[2024-11-27 04:24:07.665641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:11.254 [2024-11-27 04:24:07.668371] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:11.254 passed 00:07:11.254 Test: blockdev write read 8 blocks ...passed 00:07:11.254 Test: blockdev write read size > 128k ...passed 00:07:11.254 Test: blockdev write read invalid size ...passed 00:07:11.254 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:11.254 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:11.254 Test: blockdev write read max offset ...passed 00:07:11.254 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:11.254 Test: blockdev writev readv 8 blocks ...passed 00:07:11.254 Test: blockdev writev readv 30 x 1block ...passed 00:07:11.254 Test: blockdev writev readv block ...passed 00:07:11.254 Test: blockdev writev readv size > 128k ...passed 00:07:11.254 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:11.254 Test: blockdev comparev and writev ...[2024-11-27 04:24:07.675646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf43c000 len:0x1000 00:07:11.254 [2024-11-27 04:24:07.675783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:11.254 passed 00:07:11.254 Test: blockdev nvme passthru rw ...passed 00:07:11.254 Test: blockdev nvme passthru vendor specific ...[2024-11-27 04:24:07.676418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:11.254 passed[2024-11-27 04:24:07.676515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:11.254 00:07:11.254 Test: blockdev nvme admin passthru ...passed 00:07:11.254 Test: blockdev copy ...passed 00:07:11.254 Suite: bdevio tests on: Nvme2n1 00:07:11.254 Test: blockdev write read block ...passed 00:07:11.254 Test: blockdev write zeroes read block ...passed 00:07:11.254 Test: blockdev write zeroes read no split ...passed 00:07:11.254 Test: blockdev write zeroes read split ...passed 00:07:11.254 Test: blockdev write zeroes read split partial ...passed 00:07:11.254 Test: blockdev reset ...[2024-11-27 04:24:07.731032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:11.254 [2024-11-27 04:24:07.733768] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
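Nvme2n1, Nvme2n2 and Nvme2n3 are three namespaces of the single QEMU controller at 0000:00:12.0 (serial 12342 in the dump above), so the "blockdev reset" step in each of these three suites disconnects and reconnects the same PCIe controller. The grouping is visible straight from the RPC output; an illustrative one-liner (assuming, as here, that every bdev is NVMe-backed):

    scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | "\(.driver_specific.nvme[0].pci_address) \(.name)"' | sort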
00:07:11.254 passed 00:07:11.254 Test: blockdev write read 8 blocks ...passed 00:07:11.254 Test: blockdev write read size > 128k ...passed 00:07:11.254 Test: blockdev write read invalid size ...passed 00:07:11.254 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:11.254 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:11.254 Test: blockdev write read max offset ...passed 00:07:11.254 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:11.254 Test: blockdev writev readv 8 blocks ...passed 00:07:11.254 Test: blockdev writev readv 30 x 1block ...passed 00:07:11.254 Test: blockdev writev readv block ...passed 00:07:11.254 Test: blockdev writev readv size > 128k ...passed 00:07:11.254 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:11.254 Test: blockdev comparev and writev ...[2024-11-27 04:24:07.740958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf438000 len:0x1000 00:07:11.254 [2024-11-27 04:24:07.741171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:11.254 passed 00:07:11.254 Test: blockdev nvme passthru rw ...passed 00:07:11.254 Test: blockdev nvme passthru vendor specific ...[2024-11-27 04:24:07.741907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:11.254 [2024-11-27 04:24:07.742052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:07:11.254 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:07:11.254 passed 00:07:11.254 Test: blockdev copy ...passed 00:07:11.254 Suite: bdevio tests on: Nvme1n1 00:07:11.254 Test: blockdev write read block ...passed 00:07:11.254 Test: blockdev write zeroes read block ...passed 00:07:11.254 Test: blockdev write zeroes read no split ...passed 00:07:11.254 Test: blockdev write zeroes read split ...passed 00:07:11.254 Test: blockdev write zeroes read split partial ...passed 00:07:11.254 Test: blockdev reset ...[2024-11-27 04:24:07.797770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:11.254 [2024-11-27 04:24:07.800500] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:11.254 passed 00:07:11.254 Test: blockdev write read 8 blocks ...passed 00:07:11.254 Test: blockdev write read size > 128k ...passed 00:07:11.254 Test: blockdev write read invalid size ...passed 00:07:11.254 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:11.254 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:11.254 Test: blockdev write read max offset ...passed 00:07:11.254 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:11.254 Test: blockdev writev readv 8 blocks ...passed 00:07:11.254 Test: blockdev writev readv 30 x 1block ...passed 00:07:11.254 Test: blockdev writev readv block ...passed 00:07:11.254 Test: blockdev writev readv size > 128k ...passed 00:07:11.254 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:11.254 Test: blockdev comparev and writev ...[2024-11-27 04:24:07.807775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf434000 len:0x1000 00:07:11.254 [2024-11-27 04:24:07.807910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:passed 00:07:11.255 Test: blockdev nvme passthru rw ...passed 00:07:11.255 Test: blockdev nvme passthru vendor specific ...passed 00:07:11.255 Test: blockdev nvme admin passthru ...0 sqhd:0018 p:1 m:0 dnr:1 00:07:11.255 [2024-11-27 04:24:07.808432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:11.255 [2024-11-27 04:24:07.808461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:11.255 passed 00:07:11.255 Test: blockdev copy ...passed 00:07:11.255 Suite: bdevio tests on: Nvme0n1 00:07:11.255 Test: blockdev write read block ...passed 00:07:11.255 Test: blockdev write zeroes read block ...passed 00:07:11.255 Test: blockdev write zeroes read no split ...passed 00:07:11.517 Test: blockdev write zeroes read split ...passed 00:07:11.517 Test: blockdev write zeroes read split partial ...passed 00:07:11.517 Test: blockdev reset ...[2024-11-27 04:24:07.863942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:11.517 [2024-11-27 04:24:07.866430] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
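The final suite targets Nvme0n1, the one bdev created with separate, non-interleaved metadata ("md_size": 64, "md_interleave": false in the dump above); as the next lines show, bdevio skips comparev_and_writev there because compare-and-write with separate metadata is not supported yet. The layout can be confirmed per bdev, e.g.:

    scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
      | jq '.[0] | {name, md_size, md_interleave, dif_type}'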
00:07:11.517 passed 00:07:11.517 Test: blockdev write read 8 blocks ...passed 00:07:11.517 Test: blockdev write read size > 128k ...passed 00:07:11.517 Test: blockdev write read invalid size ...passed 00:07:11.517 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:11.517 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:11.517 Test: blockdev write read max offset ...passed 00:07:11.517 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:11.517 Test: blockdev writev readv 8 blocks ...passed 00:07:11.517 Test: blockdev writev readv 30 x 1block ...passed 00:07:11.517 Test: blockdev writev readv block ...passed 00:07:11.517 Test: blockdev writev readv size > 128k ...passed 00:07:11.517 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:11.517 Test: blockdev comparev and writev ...passed 00:07:11.517 Test: blockdev nvme passthru rw ...[2024-11-27 04:24:07.873089] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:11.517 separate metadata which is not supported yet. 00:07:11.517 passed 00:07:11.517 Test: blockdev nvme passthru vendor specific ...passed 00:07:11.517 Test: blockdev nvme admin passthru ...[2024-11-27 04:24:07.873739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:11.517 [2024-11-27 04:24:07.873792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:11.517 passed 00:07:11.517 Test: blockdev copy ...passed 00:07:11.517 00:07:11.517 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.517 suites 6 6 n/a 0 0 00:07:11.517 tests 138 138 138 0 0 00:07:11.517 asserts 893 893 893 0 n/a 00:07:11.517 00:07:11.517 Elapsed time = 0.976 seconds 00:07:11.517 0 00:07:11.517 04:24:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59946 00:07:11.517 04:24:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59946 ']' 00:07:11.517 04:24:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59946 00:07:11.517 04:24:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:11.517 04:24:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.517 04:24:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59946 00:07:11.517 04:24:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.517 04:24:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.518 04:24:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59946' 00:07:11.518 killing process with pid 59946 00:07:11.518 04:24:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59946 00:07:11.518 04:24:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59946 00:07:12.087 04:24:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:12.087 00:07:12.087 real 0m1.948s 00:07:12.087 user 0m4.913s 00:07:12.087 sys 0m0.293s 00:07:12.087 04:24:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.087 ************************************ 00:07:12.087 END TEST bdev_bounds 00:07:12.087 04:24:08 blockdev_nvme.bdev_bounds -- 
common/autotest_common.sh@10 -- # set +x 00:07:12.087 ************************************ 00:07:12.087 04:24:08 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:12.087 04:24:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:12.087 04:24:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.087 04:24:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:12.087 ************************************ 00:07:12.087 START TEST bdev_nbd 00:07:12.087 ************************************ 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60000 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60000 /var/tmp/spdk-nbd.sock 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60000 ']' 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
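For bdev_nbd the harness does not reuse bdevio; it boots the stripped-down bdev_svc app on a dedicated RPC socket so the nbd_* RPCs have a target, as the following lines show. Reduced to its essentials (paths as in this job):

    test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json &
    nbd_pid=$!
    waitforlisten "$nbd_pid" /var/tmp/spdk-nbd.sock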
00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:12.087 04:24:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:12.087 [2024-11-27 04:24:08.559674] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:12.087 [2024-11-27 04:24:08.559813] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.346 [2024-11-27 04:24:08.723271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.346 [2024-11-27 04:24:08.802166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.916 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.916 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:12.916 04:24:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:12.916 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.916 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.916 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:12.916 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:12.916 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.916 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.916 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:12.916 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:12.916 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:12.916 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:12.916 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:12.916 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # 
local i 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.174 1+0 records in 00:07:13.174 1+0 records out 00:07:13.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479083 s, 8.5 MB/s 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:13.174 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.435 1+0 records in 00:07:13.435 1+0 records out 00:07:13.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328178 s, 12.5 MB/s 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
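Each bdev is exercised through the kernel NBD driver with the same four-step loop visible above: export the bdev as a /dev/nbdX node, wait for it to appear in /proc/partitions, read a single 4096-byte block with O_DIRECT, and require a non-empty copy. A condensed sketch (the /tmp scratch path is arbitrary; the harness checks /sys/module/nbd beforehand, so the nbd module is assumed loaded):

    dev=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1)   # prints e.g. /dev/nbd0
    grep -q -w "$(basename "$dev")" /proc/partitions                         # device registered?
    dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct                # one block, O_DIRECT
    [ "$(stat -c %s /tmp/nbdtest)" != 0 ]                                    # something was copied
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"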
00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.435 1+0 records in 00:07:13.435 1+0 records out 00:07:13.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035779 s, 11.4 MB/s 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:13.435 04:24:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.694 1+0 records in 00:07:13.694 1+0 records out 00:07:13.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541252 s, 7.6 MB/s 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:13.694 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:13.952 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:13.952 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:13.952 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:13.952 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:13.952 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.952 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.952 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.952 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:13.952 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.952 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.952 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.952 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.952 1+0 records in 00:07:13.952 1+0 records out 00:07:13.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358326 s, 11.4 MB/s 00:07:13.952 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.952 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.953 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.953 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.953 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.953 04:24:10 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.953 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:13.953 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.210 1+0 records in 00:07:14.210 1+0 records out 00:07:14.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413513 s, 9.9 MB/s 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:14.210 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.468 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:14.468 { 00:07:14.468 "nbd_device": "/dev/nbd0", 00:07:14.468 "bdev_name": "Nvme0n1" 00:07:14.468 }, 00:07:14.468 { 00:07:14.468 "nbd_device": "/dev/nbd1", 00:07:14.468 "bdev_name": "Nvme1n1" 00:07:14.468 }, 00:07:14.468 { 00:07:14.468 "nbd_device": "/dev/nbd2", 00:07:14.468 "bdev_name": "Nvme2n1" 00:07:14.468 }, 00:07:14.468 { 00:07:14.468 "nbd_device": "/dev/nbd3", 00:07:14.468 "bdev_name": "Nvme2n2" 00:07:14.468 }, 00:07:14.468 { 00:07:14.468 "nbd_device": "/dev/nbd4", 00:07:14.468 "bdev_name": "Nvme2n3" 00:07:14.468 }, 00:07:14.468 { 00:07:14.468 "nbd_device": "/dev/nbd5", 00:07:14.468 "bdev_name": "Nvme3n1" 00:07:14.468 } 00:07:14.468 ]' 00:07:14.468 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:14.468 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 
00:07:14.468 { 00:07:14.468 "nbd_device": "/dev/nbd0", 00:07:14.468 "bdev_name": "Nvme0n1" 00:07:14.468 }, 00:07:14.468 { 00:07:14.468 "nbd_device": "/dev/nbd1", 00:07:14.468 "bdev_name": "Nvme1n1" 00:07:14.468 }, 00:07:14.468 { 00:07:14.468 "nbd_device": "/dev/nbd2", 00:07:14.468 "bdev_name": "Nvme2n1" 00:07:14.468 }, 00:07:14.468 { 00:07:14.468 "nbd_device": "/dev/nbd3", 00:07:14.468 "bdev_name": "Nvme2n2" 00:07:14.468 }, 00:07:14.468 { 00:07:14.468 "nbd_device": "/dev/nbd4", 00:07:14.468 "bdev_name": "Nvme2n3" 00:07:14.468 }, 00:07:14.468 { 00:07:14.468 "nbd_device": "/dev/nbd5", 00:07:14.468 "bdev_name": "Nvme3n1" 00:07:14.468 } 00:07:14.468 ]' 00:07:14.468 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:14.468 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:14.468 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.468 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:14.468 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.468 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:14.468 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.468 04:24:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.726 04:24:11 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:14.986 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:14.986 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:14.987 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:14.987 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.987 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.987 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:14.987 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.987 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.987 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.987 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:15.247 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:15.247 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:15.247 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:15.247 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.247 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.247 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:15.247 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.247 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.247 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.247 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:15.507 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:15.507 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:15.507 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:15.507 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.507 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.507 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:15.507 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.507 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.507 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.507 04:24:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:15.768 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:15.768 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:15.768 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:15.768 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.768 04:24:12 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.768 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:15.768 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.768 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.768 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.768 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.768 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.768 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.768 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.768 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:16.029 /dev/nbd0 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.029 1+0 records in 00:07:16.029 1+0 records out 00:07:16.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459708 s, 8.9 MB/s 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:16.029 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:16.602 /dev/nbd1 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.602 04:24:12 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.602 1+0 records in 00:07:16.602 1+0 records out 00:07:16.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461131 s, 8.9 MB/s 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.602 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:16.603 04:24:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:16.603 /dev/nbd10 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.603 1+0 records in 00:07:16.603 1+0 records out 00:07:16.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329846 s, 12.4 MB/s 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:16.603 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:16.863 /dev/nbd11 00:07:16.863 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 
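Everything in the loop traced above is driven over SPDK's per-test RPC socket: nbd_start_disk binds a named bdev to a kernel /dev/nbdX node, nbd_get_disks reports the active pairs, and nbd_stop_disk tears one down. A hand-run sketch of that pairing, with the socket path and RPC names copied from the trace (the bdev/device pair here is illustrative):

    # All NBD management goes through the dedicated spdk-nbd.sock RPC socket.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk Nvme2n2 /dev/nbd11   # expose bdev Nvme2n2 as a kernel block device
    $rpc nbd_get_disks                       # JSON array of {nbd_device, bdev_name} mappings
    $rpc nbd_stop_disk /dev/nbd11            # detach; the node drops out of /proc/partitions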
00:07:16.863 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:16.863 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:16.863 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.863 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.863 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.863 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:16.863 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.863 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.863 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.864 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.864 1+0 records in 00:07:16.864 1+0 records out 00:07:16.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315369 s, 13.0 MB/s 00:07:16.864 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.864 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.864 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.864 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.864 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.864 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.864 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:16.864 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:17.126 /dev/nbd12 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.126 1+0 records in 00:07:17.126 1+0 records out 00:07:17.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248683 s, 16.5 MB/s 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
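Each nbd_start_disk in the trace is followed by the same readiness probe: poll /proc/partitions until the node appears (up to 20 attempts), then force one 4 KiB direct read and check that it produced bytes. A condensed sketch reconstructed from the xtrace; the back-off sleep is an assumption, since every device in this run shows up on the first poll:

    waitfornbd() {
        local nbd_name=$1 i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off between polls (not visible in this trace)
        done
        # Prove the mapping serves data: one direct 4 KiB read must land a non-empty file.
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]
    }

The waitfornbd_exit calls used at stop time appear to be the mirror image, polling until the name drops back out of /proc/partitions.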
00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:17.126 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:17.388 /dev/nbd13 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.388 1+0 records in 00:07:17.388 1+0 records out 00:07:17.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519477 s, 7.9 MB/s 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.388 04:24:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.649 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:17.649 { 00:07:17.649 "nbd_device": "/dev/nbd0", 00:07:17.649 "bdev_name": "Nvme0n1" 00:07:17.649 }, 00:07:17.649 { 00:07:17.649 "nbd_device": "/dev/nbd1", 00:07:17.649 "bdev_name": "Nvme1n1" 00:07:17.649 }, 00:07:17.649 { 00:07:17.649 "nbd_device": 
"/dev/nbd10", 00:07:17.649 "bdev_name": "Nvme2n1" 00:07:17.649 }, 00:07:17.649 { 00:07:17.649 "nbd_device": "/dev/nbd11", 00:07:17.649 "bdev_name": "Nvme2n2" 00:07:17.649 }, 00:07:17.649 { 00:07:17.649 "nbd_device": "/dev/nbd12", 00:07:17.649 "bdev_name": "Nvme2n3" 00:07:17.649 }, 00:07:17.649 { 00:07:17.649 "nbd_device": "/dev/nbd13", 00:07:17.650 "bdev_name": "Nvme3n1" 00:07:17.650 } 00:07:17.650 ]' 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:17.650 { 00:07:17.650 "nbd_device": "/dev/nbd0", 00:07:17.650 "bdev_name": "Nvme0n1" 00:07:17.650 }, 00:07:17.650 { 00:07:17.650 "nbd_device": "/dev/nbd1", 00:07:17.650 "bdev_name": "Nvme1n1" 00:07:17.650 }, 00:07:17.650 { 00:07:17.650 "nbd_device": "/dev/nbd10", 00:07:17.650 "bdev_name": "Nvme2n1" 00:07:17.650 }, 00:07:17.650 { 00:07:17.650 "nbd_device": "/dev/nbd11", 00:07:17.650 "bdev_name": "Nvme2n2" 00:07:17.650 }, 00:07:17.650 { 00:07:17.650 "nbd_device": "/dev/nbd12", 00:07:17.650 "bdev_name": "Nvme2n3" 00:07:17.650 }, 00:07:17.650 { 00:07:17.650 "nbd_device": "/dev/nbd13", 00:07:17.650 "bdev_name": "Nvme3n1" 00:07:17.650 } 00:07:17.650 ]' 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:17.650 /dev/nbd1 00:07:17.650 /dev/nbd10 00:07:17.650 /dev/nbd11 00:07:17.650 /dev/nbd12 00:07:17.650 /dev/nbd13' 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:17.650 /dev/nbd1 00:07:17.650 /dev/nbd10 00:07:17.650 /dev/nbd11 00:07:17.650 /dev/nbd12 00:07:17.650 /dev/nbd13' 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:17.650 256+0 records in 00:07:17.650 256+0 records out 00:07:17.650 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109233 s, 96.0 MB/s 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:17.650 256+0 records in 00:07:17.650 256+0 records out 00:07:17.650 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0505983 s, 20.7 MB/s 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.650 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:17.911 256+0 records in 00:07:17.911 256+0 records out 00:07:17.911 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0500568 s, 20.9 MB/s 00:07:17.911 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.911 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:17.911 256+0 records in 00:07:17.911 256+0 records out 00:07:17.911 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0519632 s, 20.2 MB/s 00:07:17.911 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.911 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:17.911 256+0 records in 00:07:17.911 256+0 records out 00:07:17.911 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0517611 s, 20.3 MB/s 00:07:17.911 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.911 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:17.911 256+0 records in 00:07:17.911 256+0 records out 00:07:17.911 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0530919 s, 19.8 MB/s 00:07:17.911 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.911 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:18.173 256+0 records in 00:07:18.173 256+0 records out 00:07:18.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0512614 s, 20.5 MB/s 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.173 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.435 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.435 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.435 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.435 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.435 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.435 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.435 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.435 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.435 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.435 04:24:14 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.435 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.435 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.435 04:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:18.694 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:18.694 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:18.694 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:18.694 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.694 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.694 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:18.694 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.694 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.694 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.694 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:18.969 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:18.969 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:18.969 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:18.969 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.969 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.969 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:18.969 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.969 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.969 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.969 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:19.241 
04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.241 04:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:19.499 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:19.756 malloc_lvol_verify 00:07:19.756 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:20.014 161f2f43-9b9b-49c8-baef-dfb3a7b422cc 00:07:20.014 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:20.275 c870facb-26df-4cdb-bb5c-2e9ceb2c0ca5 00:07:20.275 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:20.275 /dev/nbd0 00:07:20.275 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:20.275 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:20.275 04:24:16 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:20.275 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:20.275 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:20.275 mke2fs 1.47.0 (5-Feb-2023) 00:07:20.275 Discarding device blocks: 0/4096 done 00:07:20.275 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:20.275 00:07:20.275 Allocating group tables: 0/1 done 00:07:20.275 Writing inode tables: 0/1 done 00:07:20.275 Creating journal (1024 blocks): done 00:07:20.275 Writing superblocks and filesystem accounting information: 0/1 done 00:07:20.275 00:07:20.275 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:20.275 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.275 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:20.275 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.275 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:20.275 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.275 04:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60000 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60000 ']' 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60000 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60000 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.536 04:24:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.536 killing process with pid 60000 00:07:20.537 04:24:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60000' 00:07:20.537 04:24:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60000 00:07:20.537 04:24:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60000 00:07:21.477 04:24:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:21.477 00:07:21.477 real 0m9.210s 00:07:21.477 user 0m13.511s 00:07:21.477 sys 0m2.834s 
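Before the harness prints the END banner, note that every mapping above went through the same data-verify cycle: 1 MiB of /dev/urandom written through each /dev/nbd* node with direct I/O, then compared back byte-for-byte with cmp. Condensed from the trace:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of=$tmp bs=4096 count=256            # one shared 1 MiB random pattern
    for nbd in "${nbds[@]}"; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct   # write it through every mapping
    done
    for nbd in "${nbds[@]}"; do
        cmp -b -n 1M $tmp $nbd                              # read back and compare byte-for-byte
    done
    rm $tmp

The final count check explains the lone '# true' step in the teardown trace: nbd_get_disks output is piped through jq -r '.[] | .nbd_device' and grep -c /dev/nbd, and grep's non-zero exit on an empty list is tolerated so that a clean teardown reports count=0 instead of aborting.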
00:07:21.477 04:24:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.477 04:24:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:21.477 ************************************ 00:07:21.477 END TEST bdev_nbd 00:07:21.477 ************************************ 00:07:21.477 04:24:17 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:21.477 04:24:17 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:07:21.477 skipping fio tests on NVMe due to multi-ns failures. 00:07:21.477 04:24:17 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:21.477 04:24:17 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:21.477 04:24:17 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:21.477 04:24:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:21.477 04:24:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.477 04:24:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:21.477 ************************************ 00:07:21.477 START TEST bdev_verify 00:07:21.477 ************************************ 00:07:21.477 04:24:17 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:21.477 [2024-11-27 04:24:17.802135] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:21.477 [2024-11-27 04:24:17.802249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60364 ] 00:07:21.477 [2024-11-27 04:24:17.956066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.477 [2024-11-27 04:24:18.036656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.477 [2024-11-27 04:24:18.036686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.057 Running I/O for 5 seconds... 
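The verify stage now starting is plain bdevperf driven by the generated JSON config. A hand-run equivalent, arguments copied from the run_test line above:

    # -q 128: queue depth per job; -o 4096: 4 KiB I/Os; -w verify: write, read back, compare;
    # -t 5: run for five seconds; -m 0x3: cores 0 and 1 (the two reactors just started above);
    # -C: let every core in the mask drive every bdev, hence the paired per-core jobs below.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3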
00:07:24.377 24896.00 IOPS, 97.25 MiB/s [2024-11-27T04:24:21.909Z] 24448.00 IOPS, 95.50 MiB/s [2024-11-27T04:24:22.853Z] 23317.33 IOPS, 91.08 MiB/s [2024-11-27T04:24:23.796Z] 22864.00 IOPS, 89.31 MiB/s [2024-11-27T04:24:23.796Z] 22604.80 IOPS, 88.30 MiB/s 00:07:27.209 Latency(us) 00:07:27.209 [2024-11-27T04:24:23.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.209 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.209 Verification LBA range: start 0x0 length 0xbd0bd 00:07:27.209 Nvme0n1 : 5.05 1887.72 7.37 0.00 0.00 67488.26 6503.19 61301.37 00:07:27.209 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.209 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:27.209 Nvme0n1 : 5.04 1827.54 7.14 0.00 0.00 69767.60 12703.90 70980.53 00:07:27.209 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.209 Verification LBA range: start 0x0 length 0xa0000 00:07:27.209 Nvme1n1 : 5.07 1895.31 7.40 0.00 0.00 67324.66 10989.88 58478.28 00:07:27.209 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.209 Verification LBA range: start 0xa0000 length 0xa0000 00:07:27.209 Nvme1n1 : 5.04 1827.02 7.14 0.00 0.00 69663.92 15930.29 67350.84 00:07:27.209 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.209 Verification LBA range: start 0x0 length 0x80000 00:07:27.209 Nvme2n1 : 5.07 1894.81 7.40 0.00 0.00 67213.32 11141.12 58074.98 00:07:27.209 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.209 Verification LBA range: start 0x80000 length 0x80000 00:07:27.209 Nvme2n1 : 5.06 1834.04 7.16 0.00 0.00 69331.92 4486.70 66544.25 00:07:27.209 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.209 Verification LBA range: start 0x0 length 0x80000 00:07:27.209 Nvme2n2 : 5.07 1894.32 7.40 0.00 0.00 67102.25 10687.41 57268.38 00:07:27.209 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.209 Verification LBA range: start 0x80000 length 0x80000 00:07:27.209 Nvme2n2 : 5.06 1833.42 7.16 0.00 0.00 69212.73 4814.38 66947.54 00:07:27.209 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.209 Verification LBA range: start 0x0 length 0x80000 00:07:27.209 Nvme2n3 : 5.07 1893.80 7.40 0.00 0.00 67001.57 10788.23 60091.47 00:07:27.209 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.209 Verification LBA range: start 0x80000 length 0x80000 00:07:27.209 Nvme2n3 : 5.07 1842.71 7.20 0.00 0.00 68816.96 6553.60 69770.63 00:07:27.209 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.209 Verification LBA range: start 0x0 length 0x20000 00:07:27.209 Nvme3n1 : 5.07 1893.22 7.40 0.00 0.00 66904.16 6755.25 63721.16 00:07:27.209 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.209 Verification LBA range: start 0x20000 length 0x20000 00:07:27.209 Nvme3n1 : 5.07 1841.40 7.19 0.00 0.00 68736.94 7259.37 71383.83 00:07:27.209 [2024-11-27T04:24:23.796Z] =================================================================================================================== 00:07:27.209 [2024-11-27T04:24:23.796Z] Total : 22365.32 87.36 0.00 0.00 68195.38 4486.70 71383.83 00:07:28.595 00:07:28.595 real 0m7.201s 00:07:28.595 user 0m13.513s 00:07:28.595 sys 0m0.194s 00:07:28.595 ************************************ 00:07:28.595 END TEST 
bdev_verify 00:07:28.595 ************************************ 00:07:28.595 04:24:24 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.595 04:24:24 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:28.596 04:24:24 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:28.596 04:24:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:28.596 04:24:24 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.596 04:24:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:28.596 ************************************ 00:07:28.596 START TEST bdev_verify_big_io 00:07:28.596 ************************************ 00:07:28.596 04:24:24 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:28.596 [2024-11-27 04:24:25.067375] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:28.596 [2024-11-27 04:24:25.067521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60462 ] 00:07:28.857 [2024-11-27 04:24:25.233588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:28.857 [2024-11-27 04:24:25.359598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.857 [2024-11-27 04:24:25.359690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.854 Running I/O for 5 seconds... 
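Same harness, bigger stress point: bdev_verify_big_io swaps the 4 KiB I/O size for 64 KiB (-o 65536) at the same queue depth, so the IOPS figures below drop by roughly an order of magnitude while per-second throughput stays in the same range. The MiB/s column is just IOPS times I/O size; checking the third interval sample that follows:

    # 2129.00 IOPS x 64 KiB per I/O = 133.06 MiB/s, matching the reported tick
    echo '2129.00 * 65536 / 1048576' | bc -l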
00:07:33.155 23.00 IOPS, 1.44 MiB/s [2024-11-27T04:24:32.287Z] 1140.50 IOPS, 71.28 MiB/s [2024-11-27T04:24:32.287Z] 2129.00 IOPS, 133.06 MiB/s 00:07:35.700 Latency(us) 00:07:35.700 [2024-11-27T04:24:32.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.700 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:35.700 Verification LBA range: start 0x0 length 0xbd0b 00:07:35.700 Nvme0n1 : 5.67 124.08 7.75 0.00 0.00 996097.22 22282.24 1064707.94 00:07:35.700 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:35.700 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:35.700 Nvme0n1 : 5.73 122.84 7.68 0.00 0.00 1005145.51 26416.05 1058255.16 00:07:35.700 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:35.700 Verification LBA range: start 0x0 length 0xa000 00:07:35.700 Nvme1n1 : 5.68 124.04 7.75 0.00 0.00 965173.42 87919.06 896935.78 00:07:35.700 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:35.700 Verification LBA range: start 0xa000 length 0xa000 00:07:35.700 Nvme1n1 : 5.73 122.79 7.67 0.00 0.00 971987.67 109697.18 896935.78 00:07:35.701 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:35.701 Verification LBA range: start 0x0 length 0x8000 00:07:35.701 Nvme2n1 : 5.81 127.52 7.97 0.00 0.00 908472.78 55655.19 916294.10 00:07:35.701 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:35.701 Verification LBA range: start 0x8000 length 0x8000 00:07:35.701 Nvme2n1 : 5.86 127.14 7.95 0.00 0.00 915161.89 93565.24 864671.90 00:07:35.701 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:35.701 Verification LBA range: start 0x0 length 0x8000 00:07:35.701 Nvme2n2 : 5.81 132.13 8.26 0.00 0.00 858056.86 77433.30 916294.10 00:07:35.701 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:35.701 Verification LBA range: start 0x8000 length 0x8000 00:07:35.701 Nvme2n2 : 5.86 131.02 8.19 0.00 0.00 866239.15 30449.03 877577.45 00:07:35.701 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:35.701 Verification LBA range: start 0x0 length 0x8000 00:07:35.701 Nvme2n3 : 5.86 141.87 8.87 0.00 0.00 777961.10 17039.36 935652.43 00:07:35.701 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:35.701 Verification LBA range: start 0x8000 length 0x8000 00:07:35.701 Nvme2n3 : 5.90 127.44 7.96 0.00 0.00 862083.18 36296.86 1897115.96 00:07:35.701 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:35.701 Verification LBA range: start 0x0 length 0x2000 00:07:35.701 Nvme3n1 : 5.95 161.30 10.08 0.00 0.00 664431.76 79.95 955010.76 00:07:35.701 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:35.701 Verification LBA range: start 0x2000 length 0x2000 00:07:35.701 Nvme3n1 : 5.96 145.31 9.08 0.00 0.00 733648.19 746.73 1703532.70 00:07:35.701 [2024-11-27T04:24:32.288Z] =================================================================================================================== 00:07:35.701 [2024-11-27T04:24:32.288Z] Total : 1587.47 99.22 0.00 0.00 867280.88 79.95 1897115.96 00:07:37.089 00:07:37.089 real 0m8.436s 00:07:37.089 user 0m15.826s 00:07:37.089 sys 0m0.312s 00:07:37.089 04:24:33 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.089 ************************************ 
00:07:37.089 END TEST bdev_verify_big_io 00:07:37.089 ************************************ 00:07:37.089 04:24:33 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:37.089 04:24:33 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.089 04:24:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:37.089 04:24:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.089 04:24:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.089 ************************************ 00:07:37.089 START TEST bdev_write_zeroes 00:07:37.089 ************************************ 00:07:37.089 04:24:33 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.089 [2024-11-27 04:24:33.553451] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:37.089 [2024-11-27 04:24:33.553540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60574 ] 00:07:37.351 [2024-11-27 04:24:33.700050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.351 [2024-11-27 04:24:33.776704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.922 Running I/O for 1 seconds... 00:07:38.868 62175.00 IOPS, 242.87 MiB/s 00:07:38.868 Latency(us) 00:07:38.868 [2024-11-27T04:24:35.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.868 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.868 Nvme0n1 : 1.02 10305.21 40.25 0.00 0.00 12395.32 4562.31 34683.67 00:07:38.868 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.868 Nvme1n1 : 1.02 10325.84 40.34 0.00 0.00 12355.58 9074.22 26819.35 00:07:38.868 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.868 Nvme2n1 : 1.02 10314.13 40.29 0.00 0.00 12337.27 9175.04 25206.15 00:07:38.868 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.868 Nvme2n2 : 1.02 10302.52 40.24 0.00 0.00 12334.66 9175.04 25206.15 00:07:38.868 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.868 Nvme2n3 : 1.03 10290.72 40.20 0.00 0.00 12326.51 9023.80 26819.35 00:07:38.868 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.868 Nvme3n1 : 1.03 10279.09 40.15 0.00 0.00 12317.14 8519.68 27424.30 00:07:38.868 [2024-11-27T04:24:35.455Z] =================================================================================================================== 00:07:38.868 [2024-11-27T04:24:35.455Z] Total : 61817.52 241.47 0.00 0.00 12344.39 4562.31 34683.67 00:07:39.812 00:07:39.813 real 0m2.650s 00:07:39.813 user 0m2.355s 00:07:39.813 sys 0m0.176s 00:07:39.813 04:24:36 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.813 04:24:36 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:39.813 
************************************ 00:07:39.813 END TEST bdev_write_zeroes 00:07:39.813 ************************************ 00:07:39.813 04:24:36 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:39.813 04:24:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:39.813 04:24:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.813 04:24:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.813 ************************************ 00:07:39.813 START TEST bdev_json_nonenclosed 00:07:39.813 ************************************ 00:07:39.813 04:24:36 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:39.813 [2024-11-27 04:24:36.296067] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:39.813 [2024-11-27 04:24:36.296243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60627 ] 00:07:40.074 [2024-11-27 04:24:36.460679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.074 [2024-11-27 04:24:36.582764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.074 [2024-11-27 04:24:36.582856] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:40.074 [2024-11-27 04:24:36.582876] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:40.074 [2024-11-27 04:24:36.582886] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.336 00:07:40.336 real 0m0.554s 00:07:40.336 user 0m0.333s 00:07:40.336 sys 0m0.114s 00:07:40.336 ************************************ 00:07:40.336 END TEST bdev_json_nonenclosed 00:07:40.336 ************************************ 00:07:40.336 04:24:36 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.336 04:24:36 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:40.336 04:24:36 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.336 04:24:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:40.336 04:24:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.336 04:24:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.336 ************************************ 00:07:40.336 START TEST bdev_json_nonarray 00:07:40.336 ************************************ 00:07:40.336 04:24:36 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.336 [2024-11-27 04:24:36.916522] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
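bdev_json_nonenclosed above passed by failing as intended, and the nonarray variant starting here probes the other half of the same validation (its error lands just below): json_config_prepare_ctx requires a config enclosed in {} whose 'subsystems' key holds an array. A minimal config satisfying both checks, shape inferred from the two error messages (an empty list is enough for the parser; the real runs use the generated bdev.json):

    cat > /tmp/minimal_config.json <<'EOF'
    {
      "subsystems": []
    }
    EOF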
00:07:40.336 [2024-11-27 04:24:36.916657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60647 ] 00:07:40.598 [2024-11-27 04:24:37.077934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.858 [2024-11-27 04:24:37.199457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.858 [2024-11-27 04:24:37.199576] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:07:40.858 [2024-11-27 04:24:37.199596] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:40.858 [2024-11-27 04:24:37.199607] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.858 00:07:40.858 real 0m0.545s 00:07:40.858 user 0m0.330s 00:07:40.858 sys 0m0.110s 00:07:40.858 ************************************ 00:07:40.858 END TEST bdev_json_nonarray 00:07:40.858 ************************************ 00:07:40.858 04:24:37 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.858 04:24:37 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:41.120 04:24:37 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:41.120 04:24:37 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:41.120 04:24:37 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:41.120 04:24:37 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:41.120 04:24:37 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:41.120 04:24:37 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:41.120 04:24:37 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:41.120 04:24:37 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:41.120 04:24:37 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:41.120 04:24:37 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:41.120 04:24:37 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:41.120 00:07:41.120 real 0m35.387s 00:07:41.120 user 0m55.010s 00:07:41.120 sys 0m4.944s 00:07:41.120 04:24:37 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.120 ************************************ 00:07:41.120 END TEST blockdev_nvme 00:07:41.120 ************************************ 00:07:41.120 04:24:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.120 04:24:37 -- spdk/autotest.sh@209 -- # uname -s 00:07:41.120 04:24:37 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:41.120 04:24:37 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:41.120 04:24:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:41.120 04:24:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.120 04:24:37 -- common/autotest_common.sh@10 -- # set +x 00:07:41.120 ************************************ 00:07:41.120 START TEST blockdev_nvme_gpt 00:07:41.120 ************************************ 00:07:41.120 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:41.120 * Looking for test storage... 
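The two JSON negative tests above check that the config loader rejects a top level that is not an object ("not enclosed in {}") and a "subsystems" key that is not an array. A hedged sketch of the shapes involved (the exact contents of nonenclosed.json and nonarray.json are an assumption inferred from the error messages):

    # Accepted shape: an object whose "subsystems" key is an array.
    cat > good.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    EOF
    # Rejected by bdev_json_nonenclosed: top level is a bare array, not an object.
    cat > nonenclosed.json <<'EOF'
    [ { "subsystem": "bdev", "config": [] } ]
    EOF
    # Rejected by bdev_json_nonarray: "subsystems" is an object, not an array.
    cat > nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev", "config": [] } }
    EOF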
00:07:41.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:41.120 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:41.120 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:41.120 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:41.120 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:41.120 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.121 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:41.121 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.121 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.121 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.121 04:24:37 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:41.121 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.121 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.121 --rc genhtml_branch_coverage=1 00:07:41.121 --rc genhtml_function_coverage=1 00:07:41.121 --rc genhtml_legend=1 00:07:41.121 --rc geninfo_all_blocks=1 00:07:41.121 --rc geninfo_unexecuted_blocks=1 00:07:41.121 00:07:41.121 ' 00:07:41.121 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.121 --rc 
genhtml_branch_coverage=1 00:07:41.121 --rc genhtml_function_coverage=1 00:07:41.121 --rc genhtml_legend=1 00:07:41.121 --rc geninfo_all_blocks=1 00:07:41.121 --rc geninfo_unexecuted_blocks=1 00:07:41.121 00:07:41.121 ' 00:07:41.121 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.121 --rc genhtml_branch_coverage=1 00:07:41.121 --rc genhtml_function_coverage=1 00:07:41.121 --rc genhtml_legend=1 00:07:41.121 --rc geninfo_all_blocks=1 00:07:41.121 --rc geninfo_unexecuted_blocks=1 00:07:41.121 00:07:41.121 ' 00:07:41.121 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.121 --rc genhtml_branch_coverage=1 00:07:41.121 --rc genhtml_function_coverage=1 00:07:41.121 --rc genhtml_legend=1 00:07:41.121 --rc geninfo_all_blocks=1 00:07:41.121 --rc geninfo_unexecuted_blocks=1 00:07:41.121 00:07:41.121 ' 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60731 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60731 
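The lcov version probe traced above goes through the lt()/cmp_versions helpers in scripts/common.sh, which split each version string on ".-:" and compare the fields numerically. A simplified sketch of that comparison (condensed from the trace, not the verbatim helper):

    # lt A B: succeed when version A sorts strictly before version B.
    lt() {
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1    # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"    # true, as in the trace above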
00:07:41.121 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60731 ']' 00:07:41.121 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.121 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.121 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.121 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.121 04:24:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.121 04:24:37 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:41.382 [2024-11-27 04:24:37.767377] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:41.382 [2024-11-27 04:24:37.767520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60731 ] 00:07:41.382 [2024-11-27 04:24:37.931885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.642 [2024-11-27 04:24:38.060936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.214 04:24:38 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.214 04:24:38 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:42.214 04:24:38 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:42.214 04:24:38 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:42.214 04:24:38 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:42.793 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:42.793 Waiting for block devices as requested 00:07:42.793 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:42.793 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:43.059 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:43.059 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:48.393 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:48.393 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:48.393 04:24:44 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:48.393 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:48.394 04:24:44 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:48.394 04:24:44 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:48.394 BYT; 00:07:48.394 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:48.394 BYT; 00:07:48.394 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:48.394 04:24:44 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:48.394 04:24:44 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:49.335 The operation has completed successfully. 00:07:49.335 04:24:45 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:50.313 The operation has completed successfully. 00:07:50.313 04:24:46 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:50.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:51.145 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:51.145 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:51.145 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:51.145 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:51.145 04:24:47 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:51.145 04:24:47 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.145 04:24:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.145 [] 00:07:51.145 04:24:47 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.145 04:24:47 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:51.145 04:24:47 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:51.145 04:24:47 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:51.145 04:24:47 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:51.145 04:24:47 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:51.145 04:24:47 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.145 04:24:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.717 04:24:47 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.717 04:24:47 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:51.717 04:24:47 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.717 04:24:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.717 04:24:47 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.717 04:24:47 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:51.717 04:24:47 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:51.717 04:24:48 
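setup_gpt_conf, as traced above, does three things before restarting the target: it skips any zoned namespaces (none here), finds the first namespace with no recognised disk label, and stamps SPDK's GPT partition-type GUIDs onto two test partitions. A condensed sketch of those steps, using the same commands the trace shows:

    # 1) A namespace is zoned when sysfs reports anything but "none".
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]] \
            && echo "skip ${nvme##*/}"
    done
    # 2) Pull SPDK's partition-type GUID out of the GPT header, strip the 0x prefixes.
    GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
    spdk_guid=${spdk_guid//0x/}    # -> 6527994e-2c5a-4eec-9613-8f5944074e8b
    # 3) Label the blank namespace, then retype/re-GUID the first partition.
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1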
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.717 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.717 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.717 04:24:48 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:51.717 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.717 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.717 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.717 04:24:48 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:51.717 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.717 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.717 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.717 04:24:48 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:51.717 04:24:48 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:51.717 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.717 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.717 04:24:48 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:51.717 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.717 04:24:48 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:51.717 04:24:48 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:51.718 04:24:48 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "286f5df8-3432-4c8b-b2b3-4e7af2ce909f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "286f5df8-3432-4c8b-b2b3-4e7af2ce909f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "fee75449-4714-4a72-bc71-400678a31891"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fee75449-4714-4a72-bc71-400678a31891",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ffd1ef42-cb31-4b33-b088-28b360edb005"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ffd1ef42-cb31-4b33-b088-28b360edb005",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "095554fa-df75-48b6-a3d9-0fa66a4c9e66"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "095554fa-df75-48b6-a3d9-0fa66a4c9e66",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "3b3cf890-a6ad-403b-992b-c843214ea27a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3b3cf890-a6ad-403b-992b-c843214ea27a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:51.718 04:24:48 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:51.718 04:24:48 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:51.718 04:24:48 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:51.718 04:24:48 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60731 00:07:51.718 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60731 ']' 00:07:51.718 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60731 00:07:51.718 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:51.718 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.718 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60731 00:07:51.718 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.718 killing process with pid 60731 00:07:51.718 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.718 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60731' 00:07:51.718 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60731 00:07:51.718 04:24:48 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60731 00:07:53.105 04:24:49 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:53.105 04:24:49 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:53.105 04:24:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:53.105 04:24:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.105 04:24:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.105 ************************************ 00:07:53.105 START TEST bdev_hello_world 00:07:53.105 ************************************ 00:07:53.105 04:24:49 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:53.105 
[2024-11-27 04:24:49.394387] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:53.105 [2024-11-27 04:24:49.394512] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61356 ] 00:07:53.105 [2024-11-27 04:24:49.550289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.105 [2024-11-27 04:24:49.630139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.678 [2024-11-27 04:24:50.126260] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:53.678 [2024-11-27 04:24:50.126310] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:53.678 [2024-11-27 04:24:50.126330] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:53.678 [2024-11-27 04:24:50.128752] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:53.678 [2024-11-27 04:24:50.129177] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:53.678 [2024-11-27 04:24:50.129213] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:53.678 [2024-11-27 04:24:50.129384] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:53.678 00:07:53.678 [2024-11-27 04:24:50.129407] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:54.250 00:07:54.250 real 0m1.500s 00:07:54.250 user 0m1.238s 00:07:54.250 sys 0m0.155s 00:07:54.250 04:24:50 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.250 04:24:50 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:54.250 ************************************ 00:07:54.250 END TEST bdev_hello_world 00:07:54.250 ************************************ 00:07:54.511 04:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:54.511 04:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:54.511 04:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.511 04:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:54.511 ************************************ 00:07:54.511 START TEST bdev_bounds 00:07:54.511 ************************************ 00:07:54.511 04:24:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:54.511 04:24:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61392 00:07:54.511 04:24:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:54.511 Process bdevio pid: 61392 00:07:54.511 04:24:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61392' 00:07:54.511 04:24:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61392 00:07:54.511 04:24:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61392 ']' 00:07:54.511 04:24:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.511 04:24:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:54.511 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.511 04:24:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.511 04:24:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.511 04:24:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.511 04:24:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:54.511 [2024-11-27 04:24:50.931813] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:54.511 [2024-11-27 04:24:50.931932] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61392 ] 00:07:54.511 [2024-11-27 04:24:51.083259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.773 [2024-11-27 04:24:51.159903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.773 [2024-11-27 04:24:51.160047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.773 [2024-11-27 04:24:51.160131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.346 04:24:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.346 04:24:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:55.346 04:24:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:55.346 I/O targets: 00:07:55.346 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:55.346 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:55.346 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:55.346 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:55.346 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:55.346 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:55.346 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:55.346 00:07:55.346 00:07:55.346 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.346 http://cunit.sourceforge.net/ 00:07:55.346 00:07:55.346 00:07:55.346 Suite: bdevio tests on: Nvme3n1 00:07:55.346 Test: blockdev write read block ...passed 00:07:55.346 Test: blockdev write zeroes read block ...passed 00:07:55.346 Test: blockdev write zeroes read no split ...passed 00:07:55.346 Test: blockdev write zeroes read split ...passed 00:07:55.346 Test: blockdev write zeroes read split partial ...passed 00:07:55.346 Test: blockdev reset ...[2024-11-27 04:24:51.849916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:55.346 passed 00:07:55.346 Test: blockdev write read 8 blocks ...[2024-11-27 04:24:51.852736] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:55.346 passed 00:07:55.346 Test: blockdev write read size > 128k ...passed 00:07:55.346 Test: blockdev write read invalid size ...passed 00:07:55.346 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.346 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.346 Test: blockdev write read max offset ...passed 00:07:55.346 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.346 Test: blockdev writev readv 8 blocks ...passed 00:07:55.346 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.346 Test: blockdev writev readv block ...passed 00:07:55.346 Test: blockdev writev readv size > 128k ...passed 00:07:55.346 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.346 Test: blockdev comparev and writev ...[2024-11-27 04:24:51.859967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1c04000 len:0x1000 00:07:55.346 [2024-11-27 04:24:51.860012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.346 passed 00:07:55.346 Test: blockdev nvme passthru rw ...passed 00:07:55.346 Test: blockdev nvme passthru vendor specific ...passed 00:07:55.346 Test: blockdev nvme admin passthru ...[2024-11-27 04:24:51.860600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:55.346 [2024-11-27 04:24:51.860624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:55.346 passed 00:07:55.347 Test: blockdev copy ...passed 00:07:55.347 Suite: bdevio tests on: Nvme2n3 00:07:55.347 Test: blockdev write read block ...passed 00:07:55.347 Test: blockdev write zeroes read block ...passed 00:07:55.347 Test: blockdev write zeroes read no split ...passed 00:07:55.347 Test: blockdev write zeroes read split ...passed 00:07:55.347 Test: blockdev write zeroes read split partial ...passed 00:07:55.347 Test: blockdev reset ...[2024-11-27 04:24:51.926420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:55.347 passed 00:07:55.347 Test: blockdev write read 8 blocks ...[2024-11-27 04:24:51.929026] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:55.347 passed 00:07:55.347 Test: blockdev write read size > 128k ...passed 00:07:55.347 Test: blockdev write read invalid size ...passed 00:07:55.347 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.347 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.347 Test: blockdev write read max offset ...passed 00:07:55.607 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.607 Test: blockdev writev readv 8 blocks ...passed 00:07:55.607 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.607 Test: blockdev writev readv block ...passed 00:07:55.607 Test: blockdev writev readv size > 128k ...passed 00:07:55.607 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.607 Test: blockdev comparev and writev ...[2024-11-27 04:24:51.936243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1c02000 len:0x1000 00:07:55.607 [2024-11-27 04:24:51.936279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.607 passed 00:07:55.607 Test: blockdev nvme passthru rw ...passed 00:07:55.607 Test: blockdev nvme passthru vendor specific ...passed 00:07:55.607 Test: blockdev nvme admin passthru ...[2024-11-27 04:24:51.936850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:55.607 [2024-11-27 04:24:51.936873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:55.607 passed 00:07:55.607 Test: blockdev copy ...passed 00:07:55.607 Suite: bdevio tests on: Nvme2n2 00:07:55.607 Test: blockdev write read block ...passed 00:07:55.607 Test: blockdev write zeroes read block ...passed 00:07:55.607 Test: blockdev write zeroes read no split ...passed 00:07:55.607 Test: blockdev write zeroes read split ...passed 00:07:55.607 Test: blockdev write zeroes read split partial ...passed 00:07:55.607 Test: blockdev reset ...[2024-11-27 04:24:51.992319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:55.608 passed 00:07:55.608 Test: blockdev write read 8 blocks ...[2024-11-27 04:24:51.995024] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:55.608 passed 00:07:55.608 Test: blockdev write read size > 128k ...passed 00:07:55.608 Test: blockdev write read invalid size ...passed 00:07:55.608 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.608 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.608 Test: blockdev write read max offset ...passed 00:07:55.608 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.608 Test: blockdev writev readv 8 blocks ...passed 00:07:55.608 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.608 Test: blockdev writev readv block ...passed 00:07:55.608 Test: blockdev writev readv size > 128k ...passed 00:07:55.608 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.608 Test: blockdev comparev and writev ...[2024-11-27 04:24:52.002229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8c38000 len:0x1000 00:07:55.608 [2024-11-27 04:24:52.002265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.608 passed 00:07:55.608 Test: blockdev nvme passthru rw ...passed 00:07:55.608 Test: blockdev nvme passthru vendor specific ...passed 00:07:55.608 Test: blockdev nvme admin passthru ...[2024-11-27 04:24:52.003003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:55.608 [2024-11-27 04:24:52.003027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:55.608 passed 00:07:55.608 Test: blockdev copy ...passed 00:07:55.608 Suite: bdevio tests on: Nvme2n1 00:07:55.608 Test: blockdev write read block ...passed 00:07:55.608 Test: blockdev write zeroes read block ...passed 00:07:55.608 Test: blockdev write zeroes read no split ...passed 00:07:55.608 Test: blockdev write zeroes read split ...passed 00:07:55.608 Test: blockdev write zeroes read split partial ...passed 00:07:55.608 Test: blockdev reset ...[2024-11-27 04:24:52.064979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:55.608 [2024-11-27 04:24:52.067665] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:55.608 passed 00:07:55.608 Test: blockdev write read 8 blocks ...passed 00:07:55.608 Test: blockdev write read size > 128k ...passed 00:07:55.608 Test: blockdev write read invalid size ...passed 00:07:55.608 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.608 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.608 Test: blockdev write read max offset ...passed 00:07:55.608 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.608 Test: blockdev writev readv 8 blocks ...passed 00:07:55.608 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.608 Test: blockdev writev readv block ...passed 00:07:55.608 Test: blockdev writev readv size > 128k ...passed 00:07:55.608 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.608 Test: blockdev comparev and writev ...passed 00:07:55.608 Test: blockdev nvme passthru rw ...[2024-11-27 04:24:52.075799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8c34000 len:0x1000 00:07:55.608 [2024-11-27 04:24:52.075835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.608 passed 00:07:55.608 Test: blockdev nvme passthru vendor specific ...passed 00:07:55.608 Test: blockdev nvme admin passthru ...[2024-11-27 04:24:52.076430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:55.608 [2024-11-27 04:24:52.076451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:55.608 passed 00:07:55.608 Test: blockdev copy ...passed 00:07:55.608 Suite: bdevio tests on: Nvme1n1p2 00:07:55.608 Test: blockdev write read block ...passed 00:07:55.608 Test: blockdev write zeroes read block ...passed 00:07:55.608 Test: blockdev write zeroes read no split ...passed 00:07:55.608 Test: blockdev write zeroes read split ...passed 00:07:55.608 Test: blockdev write zeroes read split partial ...passed 00:07:55.608 Test: blockdev reset ...[2024-11-27 04:24:52.132644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:55.608 passed 00:07:55.608 Test: blockdev write read 8 blocks ...[2024-11-27 04:24:52.135033] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:55.608 passed 00:07:55.608 Test: blockdev write read size > 128k ...passed 00:07:55.608 Test: blockdev write read invalid size ...passed 00:07:55.608 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.608 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.608 Test: blockdev write read max offset ...passed 00:07:55.608 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.608 Test: blockdev writev readv 8 blocks ...passed 00:07:55.608 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.608 Test: blockdev writev readv block ...passed 00:07:55.608 Test: blockdev writev readv size > 128k ...passed 00:07:55.608 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.608 Test: blockdev comparev and writev ...[2024-11-27 04:24:52.142338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d8c30000 len:0x1000 00:07:55.608 [2024-11-27 04:24:52.142373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.608 passed 00:07:55.608 Test: blockdev nvme passthru rw ...passed 00:07:55.608 Test: blockdev nvme passthru vendor specific ...passed 00:07:55.608 Test: blockdev nvme admin passthru ...passed 00:07:55.608 Test: blockdev copy ...passed 00:07:55.608 Suite: bdevio tests on: Nvme1n1p1 00:07:55.608 Test: blockdev write read block ...passed 00:07:55.608 Test: blockdev write zeroes read block ...passed 00:07:55.608 Test: blockdev write zeroes read no split ...passed 00:07:55.608 Test: blockdev write zeroes read split ...passed 00:07:55.608 Test: blockdev write zeroes read split partial ...passed 00:07:55.608 Test: blockdev reset ...[2024-11-27 04:24:52.187737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:55.608 passed 00:07:55.608 Test: blockdev write read 8 blocks ...[2024-11-27 04:24:52.190180] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:55.608 passed 00:07:55.608 Test: blockdev write read size > 128k ...passed 00:07:55.867 Test: blockdev write read invalid size ...passed 00:07:55.867 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.867 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.867 Test: blockdev write read max offset ...passed 00:07:55.867 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.867 Test: blockdev writev readv 8 blocks ...passed 00:07:55.867 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.867 Test: blockdev writev readv block ...passed 00:07:55.867 Test: blockdev writev readv size > 128k ...passed 00:07:55.867 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.867 Test: blockdev comparev and writev ...[2024-11-27 04:24:52.197414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b260e000 len:0x1000 00:07:55.867 [2024-11-27 04:24:52.197448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.867 passed 00:07:55.867 Test: blockdev nvme passthru rw ...passed 00:07:55.867 Test: blockdev nvme passthru vendor specific ...passed 00:07:55.867 Test: blockdev nvme admin passthru ...passed 00:07:55.867 Test: blockdev copy ...passed 00:07:55.867 Suite: bdevio tests on: Nvme0n1 00:07:55.867 Test: blockdev write read block ...passed 00:07:55.867 Test: blockdev write zeroes read block ...passed 00:07:55.867 Test: blockdev write zeroes read no split ...passed 00:07:55.867 Test: blockdev write zeroes read split ...passed 00:07:55.867 Test: blockdev write zeroes read split partial ...passed 00:07:55.867 Test: blockdev reset ...[2024-11-27 04:24:52.245344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:55.867 [2024-11-27 04:24:52.247711] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:55.867 passed 00:07:55.867 Test: blockdev write read 8 blocks ...passed 00:07:55.867 Test: blockdev write read size > 128k ...passed 00:07:55.867 Test: blockdev write read invalid size ...passed 00:07:55.867 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.867 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.867 Test: blockdev write read max offset ...passed 00:07:55.867 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.867 Test: blockdev writev readv 8 blocks ...passed 00:07:55.867 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.867 Test: blockdev writev readv block ...passed 00:07:55.867 Test: blockdev writev readv size > 128k ...passed 00:07:55.867 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.867 Test: blockdev comparev and writev ...[2024-11-27 04:24:52.254016] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:55.867 separate metadata which is not supported yet. 
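The ERROR line above is bdevio deliberately skipping its comparev+writev case: Nvme0n1 carries separate (non-interleaved) metadata, which that test does not support yet. One could inspect each bdev's metadata layout over RPC to see which devices are affected; a sketch assuming the usual bdev_get_bdevs fields (md_size, md_interleave), not something this test run performs:

    # Hypothetical inspection: list each bdev's metadata layout. md_size > 0
    # with md_interleave == false means separate metadata, the condition
    # behind the skip message above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | "\(.name) md_size=\(.md_size) md_interleave=\(.md_interleave)"'
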
00:07:55.867 passed 00:07:55.867 Test: blockdev nvme passthru rw ...passed 00:07:55.867 Test: blockdev nvme passthru vendor specific ...passed 00:07:55.867 Test: blockdev nvme admin passthru ...[2024-11-27 04:24:52.254440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:55.867 [2024-11-27 04:24:52.254471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:55.867 passed 00:07:55.867 Test: blockdev copy ...passed 00:07:55.867 00:07:55.867 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.867 suites 7 7 n/a 0 0 00:07:55.867 tests 161 161 161 0 0 00:07:55.867 asserts 1025 1025 1025 0 n/a 00:07:55.867 00:07:55.867 Elapsed time = 1.202 seconds 00:07:55.867 0 00:07:55.867 04:24:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61392 00:07:55.867 04:24:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61392 ']' 00:07:55.867 04:24:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61392 00:07:55.867 04:24:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:55.867 04:24:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.867 04:24:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61392 00:07:55.867 killing process with pid 61392 00:07:55.867 04:24:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.867 04:24:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.867 04:24:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61392' 00:07:55.867 04:24:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61392 00:07:55.867 04:24:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61392 00:07:56.439 ************************************ 00:07:56.439 END TEST bdev_bounds 00:07:56.439 ************************************ 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:56.439 00:07:56.439 real 0m1.959s 00:07:56.439 user 0m5.026s 00:07:56.439 sys 0m0.251s 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:56.439 04:24:52 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:56.439 04:24:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:56.439 04:24:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.439 04:24:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.439 ************************************ 00:07:56.439 START TEST bdev_nbd 00:07:56.439 ************************************ 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:56.439 04:24:52 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61446 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61446 /var/tmp/spdk-nbd.sock 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61446 ']' 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:56.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:56.439 04:24:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:56.439 [2024-11-27 04:24:52.939247] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
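At this point bdev_bounds has been torn down and bdev_nbd takes over: a standalone bdev_svc is started with the same bdev.json, listening on its own RPC socket at /var/tmp/spdk-nbd.sock, and everything that follows drives it through rpc.py. The first phase loops over the seven bdevs and exports each one over NBD, letting SPDK pick the device node; a minimal sketch of that loop, with paths and bdev names taken from the trace:

    # Sketch of the export loop traced below; nbd_start_disk without an
    # explicit /dev/nbdX argument lets SPDK choose a free node and prints it
    # (e.g. /dev/nbd0 for Nvme0n1).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    for bdev in Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
        nbd_device=$("$rpc" -s "$sock" nbd_start_disk "$bdev")
        echo "$bdev exported at $nbd_device"
    done
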
00:07:56.439 [2024-11-27 04:24:52.939362] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.701 [2024-11-27 04:24:53.095472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.701 [2024-11-27 04:24:53.171355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.274 04:24:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.274 04:24:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:57.274 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:57.274 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.274 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:57.274 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:57.274 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:57.274 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.274 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:57.274 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:57.274 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:57.274 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:57.274 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:57.274 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:57.274 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:57.535 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:57.535 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:57.535 04:24:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:57.535 04:24:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:57.535 04:24:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.535 04:24:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.535 04:24:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.535 04:24:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:57.535 04:24:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.535 04:24:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.535 04:24:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.535 04:24:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.535 1+0 records in 00:07:57.535 1+0 records out 00:07:57.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429812 s, 9.5 MB/s 00:07:57.535 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.535 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.535 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.535 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.535 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.535 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:57.535 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:57.535 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.797 1+0 records in 00:07:57.797 1+0 records out 00:07:57.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480781 s, 8.5 MB/s 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:57.797 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.058 1+0 records in 00:07:58.058 1+0 records out 00:07:58.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562088 s, 7.3 MB/s 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:58.058 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.320 1+0 records in 00:07:58.320 1+0 records out 00:07:58.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057811 s, 7.1 MB/s 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:58.320 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.581 1+0 records in 00:07:58.581 1+0 records out 00:07:58.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485123 s, 8.4 MB/s 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:58.581 04:24:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
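Every nbd_start_disk above is followed by the same waitfornbd dance from autotest_common.sh: poll /proc/partitions until the node appears, then prove it actually serves I/O by reading one 4 KiB block with O_DIRECT and checking the copy is non-empty. Reconstructed from the xtrace as a minimal sketch (the sleep between retries is assumed; the trace only ever shows first-try success):

    # Reconstruction of the helper replayed above for nbd0..nbd6.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off, not visible in the trace
        done
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
                bs=4096 count=1 iflag=direct
            size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
            rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
            [ "$size" != 0 ] && return 0
            sleep 0.1   # assumed
        done
        return 1
    }
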
00:07:58.581 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.842 1+0 records in 00:07:58.842 1+0 records out 00:07:58.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778864 s, 5.3 MB/s 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.842 1+0 records in 00:07:58.842 1+0 records out 00:07:58.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411296 s, 10.0 MB/s 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.842 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:59.104 { 00:07:59.104 "nbd_device": "/dev/nbd0", 00:07:59.104 "bdev_name": "Nvme0n1" 00:07:59.104 }, 00:07:59.104 { 00:07:59.104 "nbd_device": "/dev/nbd1", 00:07:59.104 "bdev_name": "Nvme1n1p1" 00:07:59.104 }, 00:07:59.104 { 00:07:59.104 "nbd_device": "/dev/nbd2", 00:07:59.104 "bdev_name": "Nvme1n1p2" 00:07:59.104 }, 00:07:59.104 { 00:07:59.104 "nbd_device": "/dev/nbd3", 00:07:59.104 "bdev_name": "Nvme2n1" 00:07:59.104 }, 00:07:59.104 { 00:07:59.104 "nbd_device": "/dev/nbd4", 00:07:59.104 "bdev_name": "Nvme2n2" 00:07:59.104 }, 00:07:59.104 { 00:07:59.104 "nbd_device": "/dev/nbd5", 00:07:59.104 "bdev_name": "Nvme2n3" 00:07:59.104 }, 00:07:59.104 { 00:07:59.104 "nbd_device": "/dev/nbd6", 00:07:59.104 "bdev_name": "Nvme3n1" 00:07:59.104 } 00:07:59.104 ]' 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:59.104 { 00:07:59.104 "nbd_device": "/dev/nbd0", 00:07:59.104 "bdev_name": "Nvme0n1" 00:07:59.104 }, 00:07:59.104 { 00:07:59.104 "nbd_device": "/dev/nbd1", 00:07:59.104 "bdev_name": "Nvme1n1p1" 00:07:59.104 }, 00:07:59.104 { 00:07:59.104 "nbd_device": "/dev/nbd2", 00:07:59.104 "bdev_name": "Nvme1n1p2" 00:07:59.104 }, 00:07:59.104 { 00:07:59.104 "nbd_device": "/dev/nbd3", 00:07:59.104 "bdev_name": "Nvme2n1" 00:07:59.104 }, 00:07:59.104 { 00:07:59.104 "nbd_device": "/dev/nbd4", 00:07:59.104 "bdev_name": "Nvme2n2" 00:07:59.104 }, 00:07:59.104 { 00:07:59.104 "nbd_device": "/dev/nbd5", 00:07:59.104 "bdev_name": "Nvme2n3" 00:07:59.104 }, 00:07:59.104 { 00:07:59.104 "nbd_device": "/dev/nbd6", 00:07:59.104 "bdev_name": "Nvme3n1" 00:07:59.104 } 00:07:59.104 ]' 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.104 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:59.365 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:59.365 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:59.365 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:59.365 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.365 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.365 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:59.365 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.366 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.366 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.366 04:24:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:59.628 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:59.628 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:59.628 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:59.628 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.628 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.628 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:59.628 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.628 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.628 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.628 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:59.889 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:59.889 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:59.889 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:59.889 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.889 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.889 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:59.889 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.889 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.889 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.889 04:24:56 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.150 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:00.411 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:00.411 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:00.411 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:00.411 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.411 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.411 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:00.411 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.411 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.411 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.411 04:24:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:00.672 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:00.672 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:00.672 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
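Teardown is the mirror image: nbd_stop_disk per device over the same socket, then waitfornbd_exit polls /proc/partitions until the kernel drops the node (the break at nbd_common.sh@41 in the trace is this loop exiting once the entry is gone). A minimal reconstruction, with the polling sleep again assumed:

    # Inverse of waitfornbd: wait for the stopped device to disappear.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed polling interval
        done
        return 0
    }
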
00:08:00.672 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.672 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.672 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:00.672 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.672 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.673 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:00.673 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.673 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:00.934 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:00.934 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.934 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:00.935 04:24:57 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:00.935 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:01.196 /dev/nbd0 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.196 1+0 records in 00:08:01.196 1+0 records out 00:08:01.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547384 s, 7.5 MB/s 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:01.196 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:01.456 /dev/nbd1 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.456 04:24:57 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.456 1+0 records in 00:08:01.456 1+0 records out 00:08:01.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000995779 s, 4.1 MB/s 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:01.456 04:24:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:01.717 /dev/nbd10 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.717 1+0 records in 00:08:01.717 1+0 records out 00:08:01.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640796 s, 6.4 MB/s 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:01.717 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:01.978 /dev/nbd11 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.978 1+0 records in 00:08:01.978 1+0 records out 00:08:01.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126918 s, 3.2 MB/s 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:01.978 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:01.978 /dev/nbd12 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
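The second pass re-exports all seven bdevs at fixed nodes (/dev/nbd0, /dev/nbd1, /dev/nbd10 through /dev/nbd14), waits for each exactly as above, then cross-checks the RPC view against the expected device count and pushes 1 MiB of random data through every node. A sketch of that tail end, with paths, the jq filter, and the count of 7 taken from the trace; the read-back comparison is an assumption, since this capture cuts off inside the verify loop:

    # Sketch of the count check and data pass traced below. The cmp step is
    # assumed from context; the log ends before showing it.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)

    count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -eq 7 ]

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of random data
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -n 1048576 "$tmp_file" "$nbd"                          # assumed read-back check
    done
    rm -f "$tmp_file"
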
00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.240 1+0 records in 00:08:02.240 1+0 records out 00:08:02.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110672 s, 3.7 MB/s 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:02.240 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:02.240 /dev/nbd13 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.502 1+0 records in 00:08:02.502 1+0 records out 00:08:02.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00174046 s, 2.4 MB/s 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.502 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:02.503 04:24:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:02.503 /dev/nbd14 00:08:02.503 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:02.503 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:02.503 04:24:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:08:02.503 04:24:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:02.503 04:24:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.503 04:24:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.503 04:24:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:08:02.503 04:24:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:02.503 04:24:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.503 04:24:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.775 04:24:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.775 1+0 records in 00:08:02.775 1+0 records out 00:08:02.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127713 s, 3.2 MB/s 00:08:02.775 04:24:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.775 04:24:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:02.775 04:24:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.775 04:24:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.775 04:24:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:02.775 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.775 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:02.775 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:02.775 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.775 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.775 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:02.775 { 00:08:02.775 "nbd_device": "/dev/nbd0", 00:08:02.775 "bdev_name": "Nvme0n1" 00:08:02.775 }, 00:08:02.775 { 00:08:02.775 "nbd_device": "/dev/nbd1", 00:08:02.775 "bdev_name": "Nvme1n1p1" 00:08:02.775 }, 00:08:02.776 { 00:08:02.776 "nbd_device": "/dev/nbd10", 00:08:02.776 "bdev_name": "Nvme1n1p2" 00:08:02.776 }, 00:08:02.776 { 00:08:02.776 "nbd_device": "/dev/nbd11", 00:08:02.776 "bdev_name": "Nvme2n1" 00:08:02.776 }, 00:08:02.776 { 00:08:02.776 "nbd_device": "/dev/nbd12", 00:08:02.776 "bdev_name": "Nvme2n2" 00:08:02.776 }, 00:08:02.776 { 00:08:02.776 "nbd_device": "/dev/nbd13", 00:08:02.776 "bdev_name": "Nvme2n3" 
00:08:02.776 }, 00:08:02.776 { 00:08:02.776 "nbd_device": "/dev/nbd14", 00:08:02.776 "bdev_name": "Nvme3n1" 00:08:02.776 } 00:08:02.776 ]' 00:08:02.776 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:02.776 { 00:08:02.776 "nbd_device": "/dev/nbd0", 00:08:02.776 "bdev_name": "Nvme0n1" 00:08:02.776 }, 00:08:02.776 { 00:08:02.776 "nbd_device": "/dev/nbd1", 00:08:02.776 "bdev_name": "Nvme1n1p1" 00:08:02.776 }, 00:08:02.776 { 00:08:02.776 "nbd_device": "/dev/nbd10", 00:08:02.776 "bdev_name": "Nvme1n1p2" 00:08:02.776 }, 00:08:02.776 { 00:08:02.776 "nbd_device": "/dev/nbd11", 00:08:02.776 "bdev_name": "Nvme2n1" 00:08:02.776 }, 00:08:02.776 { 00:08:02.776 "nbd_device": "/dev/nbd12", 00:08:02.776 "bdev_name": "Nvme2n2" 00:08:02.776 }, 00:08:02.776 { 00:08:02.776 "nbd_device": "/dev/nbd13", 00:08:02.776 "bdev_name": "Nvme2n3" 00:08:02.776 }, 00:08:02.776 { 00:08:02.776 "nbd_device": "/dev/nbd14", 00:08:02.776 "bdev_name": "Nvme3n1" 00:08:02.776 } 00:08:02.776 ]' 00:08:02.776 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:02.776 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:02.776 /dev/nbd1 00:08:02.776 /dev/nbd10 00:08:02.776 /dev/nbd11 00:08:02.777 /dev/nbd12 00:08:02.777 /dev/nbd13 00:08:02.777 /dev/nbd14' 00:08:02.777 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:02.777 /dev/nbd1 00:08:02.777 /dev/nbd10 00:08:02.777 /dev/nbd11 00:08:02.777 /dev/nbd12 00:08:02.777 /dev/nbd13 00:08:02.777 /dev/nbd14' 00:08:02.777 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.047 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:03.047 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:03.047 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:03.047 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:03.047 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:03.047 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:03.047 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:03.047 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:03.047 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:03.047 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:03.047 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:03.047 256+0 records in 00:08:03.047 256+0 records out 00:08:03.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00914536 s, 115 MB/s 00:08:03.047 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.047 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:03.047 256+0 records in 00:08:03.047 256+0 records out 00:08:03.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.180388 s, 5.8 MB/s 00:08:03.047 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.047 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:03.309 256+0 records in 00:08:03.309 256+0 records out 00:08:03.309 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.203672 s, 5.1 MB/s 00:08:03.309 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.309 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:03.571 256+0 records in 00:08:03.571 256+0 records out 00:08:03.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147614 s, 7.1 MB/s 00:08:03.571 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.571 04:24:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:03.571 256+0 records in 00:08:03.571 256+0 records out 00:08:03.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.237927 s, 4.4 MB/s 00:08:03.833 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.833 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:03.833 256+0 records in 00:08:03.833 256+0 records out 00:08:03.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122807 s, 8.5 MB/s 00:08:03.833 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.833 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:04.094 256+0 records in 00:08:04.094 256+0 records out 00:08:04.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.232689 s, 4.5 MB/s 00:08:04.094 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.094 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:04.355 256+0 records in 00:08:04.355 256+0 records out 00:08:04.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.258334 s, 4.1 MB/s 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.355 04:25:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:04.617 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:04.617 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:04.617 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:04.617 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.617 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.617 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:04.617 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.617 04:25:01 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:04.617 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.617 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:04.878 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:04.878 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:04.878 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:04.878 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.878 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.878 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:04.878 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.878 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.878 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.878 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.142 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:05.404 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:05.404 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:05.404 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:05.404 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.404 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.404 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:05.404 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.404 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.404 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.404 04:25:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:05.666 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:05.666 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:05.666 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:05.666 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.666 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.666 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:05.666 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.666 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.666 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.666 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:05.928 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:06.189 malloc_lvol_verify 00:08:06.189 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:06.505 548fd2a9-d029-4a16-beaa-65028b73831c 00:08:06.505 04:25:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:06.777 d61770ad-8a2b-4f75-8f2a-be4dc3a713f7 00:08:06.777 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:06.777 /dev/nbd0 00:08:06.777 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:06.777 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:06.777 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:06.777 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:06.777 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:06.777 mke2fs 1.47.0 (5-Feb-2023) 00:08:06.777 Discarding device blocks: 0/4096 done 00:08:06.777 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:06.777 00:08:06.777 Allocating group tables: 0/1 done 00:08:06.777 Writing inode tables: 0/1 done 00:08:06.777 Creating journal (1024 blocks): done 00:08:06.777 Writing superblocks and filesystem accounting information: 0/1 done 00:08:06.777 00:08:06.777 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:06.777 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.777 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:06.777 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:06.777 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:06.777 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:06.777 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61446 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61446 ']' 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61446 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.041 04:25:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61446 00:08:07.302 killing process with pid 61446 00:08:07.302 04:25:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.302 04:25:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.302 04:25:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61446' 00:08:07.302 04:25:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61446 00:08:07.302 04:25:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61446 00:08:07.874 ************************************ 00:08:07.874 END TEST bdev_nbd 00:08:07.874 ************************************ 00:08:07.874 04:25:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:07.874 00:08:07.874 real 0m11.572s 00:08:07.874 user 0m15.817s 00:08:07.874 sys 0m3.768s 00:08:07.874 04:25:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.874 04:25:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:08.136 04:25:04 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:08:08.136 04:25:04 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:08:08.136 skipping fio tests on NVMe due to multi-ns failures. 00:08:08.136 04:25:04 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:08:08.136 04:25:04 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:08.136 04:25:04 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:08.136 04:25:04 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:08.136 04:25:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:08.136 04:25:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.136 04:25:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:08.136 ************************************ 00:08:08.136 START TEST bdev_verify 00:08:08.136 ************************************ 00:08:08.136 04:25:04 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:08.136 [2024-11-27 04:25:04.577290] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:08.136 [2024-11-27 04:25:04.577437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61868 ] 00:08:08.398 [2024-11-27 04:25:04.736078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:08.398 [2024-11-27 04:25:04.821155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.398 [2024-11-27 04:25:04.821155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.971 Running I/O for 5 seconds... 
00:08:11.287 25344.00 IOPS, 99.00 MiB/s [2024-11-27T04:25:08.817Z] 22528.00 IOPS, 88.00 MiB/s [2024-11-27T04:25:09.754Z] 21973.33 IOPS, 85.83 MiB/s [2024-11-27T04:25:10.695Z] 21216.00 IOPS, 82.88 MiB/s [2024-11-27T04:25:10.695Z] 20928.00 IOPS, 81.75 MiB/s 00:08:14.108 Latency(us) 00:08:14.108 [2024-11-27T04:25:10.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.108 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.108 Verification LBA range: start 0x0 length 0xbd0bd 00:08:14.108 Nvme0n1 : 5.06 1493.71 5.83 0.00 0.00 85534.49 15022.87 71787.13 00:08:14.108 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.108 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:14.108 Nvme0n1 : 5.06 1467.98 5.73 0.00 0.00 86916.01 16333.59 84289.38 00:08:14.108 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.108 Verification LBA range: start 0x0 length 0x4ff80 00:08:14.108 Nvme1n1p1 : 5.06 1493.31 5.83 0.00 0.00 85452.87 14115.45 70173.93 00:08:14.108 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.108 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:14.108 Nvme1n1p1 : 5.06 1466.22 5.73 0.00 0.00 86814.24 17946.78 79449.80 00:08:14.108 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.108 Verification LBA range: start 0x0 length 0x4ff7f 00:08:14.108 Nvme1n1p2 : 5.06 1492.38 5.83 0.00 0.00 85339.94 15224.52 70173.93 00:08:14.108 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.108 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:14.108 Nvme1n1p2 : 5.07 1465.05 5.72 0.00 0.00 86711.45 19055.85 76223.41 00:08:14.108 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.108 Verification LBA range: start 0x0 length 0x80000 00:08:14.108 Nvme2n1 : 5.06 1492.06 5.83 0.00 0.00 85203.69 14619.57 69770.63 00:08:14.108 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.108 Verification LBA range: start 0x80000 length 0x80000 00:08:14.108 Nvme2n1 : 5.07 1464.33 5.72 0.00 0.00 86560.47 20366.57 71383.83 00:08:14.108 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.108 Verification LBA range: start 0x0 length 0x80000 00:08:14.108 Nvme2n2 : 5.06 1491.74 5.83 0.00 0.00 85064.61 14115.45 71383.83 00:08:14.108 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.108 Verification LBA range: start 0x80000 length 0x80000 00:08:14.108 Nvme2n2 : 5.09 1472.04 5.75 0.00 0.00 86024.71 6553.60 72593.72 00:08:14.108 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.108 Verification LBA range: start 0x0 length 0x80000 00:08:14.108 Nvme2n3 : 5.06 1491.43 5.83 0.00 0.00 84950.82 14216.27 71787.13 00:08:14.108 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.108 Verification LBA range: start 0x80000 length 0x80000 00:08:14.108 Nvme2n3 : 5.09 1471.48 5.75 0.00 0.00 85886.50 7108.14 78239.90 00:08:14.108 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.108 Verification LBA range: start 0x0 length 0x20000 00:08:14.108 Nvme3n1 : 5.07 1490.45 5.82 0.00 0.00 84874.21 11494.01 72593.72 00:08:14.108 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.108 Verification LBA range: start 0x20000 length 0x20000 00:08:14.108 Nvme3n1 
: 5.10 1479.90 5.78 0.00 0.00 85396.52 9931.22 83886.08 00:08:14.108 [2024-11-27T04:25:10.695Z] =================================================================================================================== 00:08:14.108 [2024-11-27T04:25:10.695Z] Total : 20732.08 80.98 0.00 0.00 85761.44 6553.60 84289.38 00:08:15.063 00:08:15.063 real 0m7.129s 00:08:15.063 user 0m13.359s 00:08:15.063 sys 0m0.219s 00:08:15.063 04:25:11 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.063 ************************************ 00:08:15.063 END TEST bdev_verify 00:08:15.063 ************************************ 00:08:15.063 04:25:11 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:15.352 04:25:11 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:15.352 04:25:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:15.352 04:25:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.352 04:25:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:15.352 ************************************ 00:08:15.352 START TEST bdev_verify_big_io 00:08:15.352 ************************************ 00:08:15.352 04:25:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:15.352 [2024-11-27 04:25:11.754469] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:15.352 [2024-11-27 04:25:11.754588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61961 ] 00:08:15.352 [2024-11-27 04:25:11.911283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:15.614 [2024-11-27 04:25:11.991625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.614 [2024-11-27 04:25:11.991706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.187 Running I/O for 5 seconds... 
00:08:20.113 517.00 IOPS, 32.31 MiB/s [2024-11-27T04:25:18.654Z] 1504.00 IOPS, 94.00 MiB/s [2024-11-27T04:25:18.654Z] 2383.67 IOPS, 148.98 MiB/s [2024-11-27T04:25:18.654Z] 2397.75 IOPS, 149.86 MiB/s 00:08:22.067 Latency(us) 00:08:22.067 [2024-11-27T04:25:18.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.067 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:22.067 Verification LBA range: start 0x0 length 0xbd0b 00:08:22.067 Nvme0n1 : 5.64 122.64 7.66 0.00 0.00 1003017.41 15022.87 1064707.94 00:08:22.067 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:22.067 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:22.067 Nvme0n1 : 5.67 121.59 7.60 0.00 0.00 988697.81 13409.67 1135688.47 00:08:22.067 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:22.067 Verification LBA range: start 0x0 length 0x4ff8 00:08:22.067 Nvme1n1p1 : 5.78 128.96 8.06 0.00 0.00 935167.77 65737.65 916294.10 00:08:22.067 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:22.067 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:22.067 Nvme1n1p1 : 5.75 120.50 7.53 0.00 0.00 986875.41 69367.34 1626099.40 00:08:22.067 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:22.067 Verification LBA range: start 0x0 length 0x4ff7 00:08:22.067 Nvme1n1p2 : 5.94 80.77 5.05 0.00 0.00 1440575.38 115343.36 1948738.17 00:08:22.067 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:22.067 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:22.067 Nvme1n1p2 : 5.75 123.56 7.72 0.00 0.00 942861.89 81869.59 1645457.72 00:08:22.067 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:22.068 Verification LBA range: start 0x0 length 0x8000 00:08:22.068 Nvme2n1 : 5.79 130.44 8.15 0.00 0.00 879699.71 67350.84 1051802.39 00:08:22.068 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:22.068 Verification LBA range: start 0x8000 length 0x8000 00:08:22.068 Nvme2n1 : 5.87 134.96 8.44 0.00 0.00 839685.47 74206.92 1187310.67 00:08:22.068 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:22.068 Verification LBA range: start 0x0 length 0x8000 00:08:22.068 Nvme2n2 : 5.87 134.87 8.43 0.00 0.00 825887.69 72997.02 1006632.96 00:08:22.068 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:22.068 Verification LBA range: start 0x8000 length 0x8000 00:08:22.068 Nvme2n2 : 5.95 137.51 8.59 0.00 0.00 801824.15 50412.31 1729343.80 00:08:22.068 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:22.068 Verification LBA range: start 0x0 length 0x8000 00:08:22.068 Nvme2n3 : 5.94 146.02 9.13 0.00 0.00 745658.90 27625.94 1077613.49 00:08:22.068 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:22.068 Verification LBA range: start 0x8000 length 0x8000 00:08:22.068 Nvme2n3 : 5.97 145.98 9.12 0.00 0.00 736988.86 14922.04 1639004.95 00:08:22.068 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:22.068 Verification LBA range: start 0x0 length 0x2000 00:08:22.068 Nvme3n1 : 5.95 161.30 10.08 0.00 0.00 660848.51 894.82 1051802.39 00:08:22.068 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:22.068 Verification LBA range: start 0x2000 length 0x2000 00:08:22.068 Nvme3n1 : 6.03 186.12 11.63 0.00 0.00 
564431.18 592.34 1555118.87 00:08:22.068 [2024-11-27T04:25:18.655Z] =================================================================================================================== 00:08:22.068 [2024-11-27T04:25:18.655Z] Total : 1875.21 117.20 0.00 0.00 847634.70 592.34 1948738.17 00:08:23.966 00:08:23.966 real 0m8.373s 00:08:23.966 user 0m15.869s 00:08:23.966 sys 0m0.231s 00:08:23.966 ************************************ 00:08:23.966 END TEST bdev_verify_big_io 00:08:23.966 ************************************ 00:08:23.966 04:25:20 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.966 04:25:20 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:23.966 04:25:20 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:23.966 04:25:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:23.966 04:25:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.966 04:25:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:23.966 ************************************ 00:08:23.966 START TEST bdev_write_zeroes 00:08:23.966 ************************************ 00:08:23.966 04:25:20 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:23.966 [2024-11-27 04:25:20.174130] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:23.966 [2024-11-27 04:25:20.174247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62071 ] 00:08:23.966 [2024-11-27 04:25:20.330469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.966 [2024-11-27 04:25:20.407589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.531 Running I/O for 1 seconds... 
00:08:25.463 67648.00 IOPS, 264.25 MiB/s 00:08:25.463 Latency(us) 00:08:25.463 [2024-11-27T04:25:22.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.463 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.463 Nvme0n1 : 1.03 9615.60 37.56 0.00 0.00 13280.59 10889.06 25206.15 00:08:25.463 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.463 Nvme1n1p1 : 1.03 9603.94 37.52 0.00 0.00 13280.51 10687.41 25407.80 00:08:25.463 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.463 Nvme1n1p2 : 1.03 9592.32 37.47 0.00 0.00 13249.92 10687.41 24298.73 00:08:25.463 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.463 Nvme2n1 : 1.03 9581.53 37.43 0.00 0.00 13204.84 10284.11 23492.14 00:08:25.463 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.463 Nvme2n2 : 1.03 9570.70 37.39 0.00 0.00 13175.92 8418.86 22988.01 00:08:25.463 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.463 Nvme2n3 : 1.03 9560.03 37.34 0.00 0.00 13169.51 8015.56 23895.43 00:08:25.463 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.463 Nvme3n1 : 1.03 9549.37 37.30 0.00 0.00 13163.26 7813.91 25407.80 00:08:25.463 [2024-11-27T04:25:22.050Z] =================================================================================================================== 00:08:25.463 [2024-11-27T04:25:22.050Z] Total : 67073.48 262.01 0.00 0.00 13217.79 7813.91 25407.80 00:08:26.404 00:08:26.404 real 0m2.615s 00:08:26.404 user 0m2.331s 00:08:26.404 sys 0m0.172s 00:08:26.404 04:25:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.404 ************************************ 00:08:26.404 END TEST bdev_write_zeroes 00:08:26.404 ************************************ 00:08:26.404 04:25:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:26.404 04:25:22 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.404 04:25:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:26.404 04:25:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.404 04:25:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:26.404 ************************************ 00:08:26.404 START TEST bdev_json_nonenclosed 00:08:26.404 ************************************ 00:08:26.405 04:25:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.405 [2024-11-27 04:25:22.852049] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:26.405 [2024-11-27 04:25:22.852166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62119 ] 00:08:26.663 [2024-11-27 04:25:23.012081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.663 [2024-11-27 04:25:23.111876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.663 [2024-11-27 04:25:23.111960] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:26.663 [2024-11-27 04:25:23.111977] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:26.663 [2024-11-27 04:25:23.111986] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.921 00:08:26.921 real 0m0.504s 00:08:26.921 user 0m0.297s 00:08:26.921 sys 0m0.103s 00:08:26.921 04:25:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.921 ************************************ 00:08:26.921 END TEST bdev_json_nonenclosed 00:08:26.921 ************************************ 00:08:26.921 04:25:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:26.921 04:25:23 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.921 04:25:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:26.921 04:25:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.921 04:25:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:26.921 ************************************ 00:08:26.921 START TEST bdev_json_nonarray 00:08:26.921 ************************************ 00:08:26.921 04:25:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.921 [2024-11-27 04:25:23.411529] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:26.921 [2024-11-27 04:25:23.411638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62144 ] 00:08:27.179 [2024-11-27 04:25:23.572222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.179 [2024-11-27 04:25:23.672262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.179 [2024-11-27 04:25:23.672357] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:27.179 [2024-11-27 04:25:23.672374] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:27.179 [2024-11-27 04:25:23.672383] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:27.440 00:08:27.440 real 0m0.500s 00:08:27.440 user 0m0.304s 00:08:27.440 sys 0m0.091s 00:08:27.440 04:25:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.440 04:25:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:27.440 ************************************ 00:08:27.440 END TEST bdev_json_nonarray 00:08:27.440 ************************************ 00:08:27.440 04:25:23 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:08:27.440 04:25:23 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:08:27.440 04:25:23 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:27.440 04:25:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.440 04:25:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.440 04:25:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:27.440 ************************************ 00:08:27.440 START TEST bdev_gpt_uuid 00:08:27.440 ************************************ 00:08:27.440 04:25:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:27.440 04:25:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:08:27.440 04:25:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:08:27.440 04:25:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62170 00:08:27.440 04:25:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:27.440 04:25:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62170 00:08:27.440 04:25:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62170 ']' 00:08:27.440 04:25:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.440 04:25:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.440 04:25:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:27.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.440 04:25:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.440 04:25:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.440 04:25:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:27.440 [2024-11-27 04:25:23.964379] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:27.440 [2024-11-27 04:25:23.964858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62170 ] 00:08:27.701 [2024-11-27 04:25:24.122797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.701 [2024-11-27 04:25:24.223170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.267 04:25:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.267 04:25:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:28.267 04:25:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:28.267 04:25:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.267 04:25:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:28.836 Some configs were skipped because the RPC state that can call them passed over. 00:08:28.836 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.836 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:08:28.836 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.836 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:28.836 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.836 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:28.836 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.836 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:28.836 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.836 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:08:28.836 { 00:08:28.836 "name": "Nvme1n1p1", 00:08:28.836 "aliases": [ 00:08:28.836 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:28.836 ], 00:08:28.836 "product_name": "GPT Disk", 00:08:28.836 "block_size": 4096, 00:08:28.836 "num_blocks": 655104, 00:08:28.836 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:28.836 "assigned_rate_limits": { 00:08:28.836 "rw_ios_per_sec": 0, 00:08:28.836 "rw_mbytes_per_sec": 0, 00:08:28.836 "r_mbytes_per_sec": 0, 00:08:28.836 "w_mbytes_per_sec": 0 00:08:28.836 }, 00:08:28.836 "claimed": false, 00:08:28.836 "zoned": false, 00:08:28.836 "supported_io_types": { 00:08:28.836 "read": true, 00:08:28.836 "write": true, 00:08:28.836 "unmap": true, 00:08:28.836 "flush": true, 00:08:28.836 "reset": true, 00:08:28.836 "nvme_admin": false, 00:08:28.836 "nvme_io": false, 00:08:28.836 "nvme_io_md": false, 00:08:28.836 "write_zeroes": true, 00:08:28.836 "zcopy": false, 00:08:28.836 "get_zone_info": false, 00:08:28.836 "zone_management": false, 00:08:28.836 "zone_append": false, 00:08:28.836 "compare": true, 00:08:28.836 "compare_and_write": false, 00:08:28.836 "abort": true, 00:08:28.836 "seek_hole": false, 00:08:28.836 "seek_data": false, 00:08:28.836 "copy": true, 00:08:28.836 "nvme_iov_md": false 00:08:28.836 }, 00:08:28.836 "driver_specific": { 
00:08:28.836 "gpt": { 00:08:28.836 "base_bdev": "Nvme1n1", 00:08:28.836 "offset_blocks": 256, 00:08:28.836 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:28.836 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:28.836 "partition_name": "SPDK_TEST_first" 00:08:28.836 } 00:08:28.836 } 00:08:28.836 } 00:08:28.836 ]' 00:08:28.836 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:08:28.836 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:08:28.837 { 00:08:28.837 "name": "Nvme1n1p2", 00:08:28.837 "aliases": [ 00:08:28.837 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:28.837 ], 00:08:28.837 "product_name": "GPT Disk", 00:08:28.837 "block_size": 4096, 00:08:28.837 "num_blocks": 655103, 00:08:28.837 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:28.837 "assigned_rate_limits": { 00:08:28.837 "rw_ios_per_sec": 0, 00:08:28.837 "rw_mbytes_per_sec": 0, 00:08:28.837 "r_mbytes_per_sec": 0, 00:08:28.837 "w_mbytes_per_sec": 0 00:08:28.837 }, 00:08:28.837 "claimed": false, 00:08:28.837 "zoned": false, 00:08:28.837 "supported_io_types": { 00:08:28.837 "read": true, 00:08:28.837 "write": true, 00:08:28.837 "unmap": true, 00:08:28.837 "flush": true, 00:08:28.837 "reset": true, 00:08:28.837 "nvme_admin": false, 00:08:28.837 "nvme_io": false, 00:08:28.837 "nvme_io_md": false, 00:08:28.837 "write_zeroes": true, 00:08:28.837 "zcopy": false, 00:08:28.837 "get_zone_info": false, 00:08:28.837 "zone_management": false, 00:08:28.837 "zone_append": false, 00:08:28.837 "compare": true, 00:08:28.837 "compare_and_write": false, 00:08:28.837 "abort": true, 00:08:28.837 "seek_hole": false, 00:08:28.837 "seek_data": false, 00:08:28.837 "copy": true, 00:08:28.837 "nvme_iov_md": false 00:08:28.837 }, 00:08:28.837 "driver_specific": { 00:08:28.837 "gpt": { 00:08:28.837 "base_bdev": "Nvme1n1", 00:08:28.837 "offset_blocks": 655360, 00:08:28.837 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:28.837 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:28.837 "partition_name": "SPDK_TEST_second" 00:08:28.837 } 00:08:28.837 } 00:08:28.837 } 00:08:28.837 ]' 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62170 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62170 ']' 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62170 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62170 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.837 killing process with pid 62170 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62170' 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62170 00:08:28.837 04:25:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62170 00:08:30.785 00:08:30.785 real 0m3.011s 00:08:30.785 user 0m3.164s 00:08:30.785 sys 0m0.343s 00:08:30.785 04:25:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.785 ************************************ 00:08:30.785 END TEST bdev_gpt_uuid 00:08:30.785 ************************************ 00:08:30.785 04:25:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:30.785 04:25:26 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:08:30.785 04:25:26 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:30.785 04:25:26 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:08:30.785 04:25:26 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:30.785 04:25:26 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:30.785 04:25:26 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:30.785 04:25:26 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:30.785 04:25:26 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:30.785 04:25:26 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:30.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:31.046 Waiting for block devices as requested 00:08:31.046 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:31.046 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:31.046 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:31.307 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:36.620 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:36.620 04:25:32 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:36.620 04:25:32 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:36.620 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:36.620 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:36.620 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:36.620 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:36.620 04:25:33 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:36.620 00:08:36.620 real 0m55.518s 00:08:36.620 user 1m10.081s 00:08:36.620 sys 0m7.933s 00:08:36.620 04:25:33 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.620 04:25:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.620 ************************************ 00:08:36.620 END TEST blockdev_nvme_gpt 00:08:36.620 ************************************ 00:08:36.620 04:25:33 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:36.620 04:25:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.620 04:25:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.620 04:25:33 -- common/autotest_common.sh@10 -- # set +x 00:08:36.620 ************************************ 00:08:36.620 START TEST nvme 00:08:36.620 ************************************ 00:08:36.620 04:25:33 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:36.620 * Looking for test storage... 00:08:36.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:36.620 04:25:33 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:36.620 04:25:33 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:36.620 04:25:33 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:36.879 04:25:33 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:36.879 04:25:33 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.879 04:25:33 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.879 04:25:33 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.879 04:25:33 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.879 04:25:33 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.879 04:25:33 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.879 04:25:33 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.879 04:25:33 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.879 04:25:33 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.879 04:25:33 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.879 04:25:33 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.879 04:25:33 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:36.879 04:25:33 nvme -- scripts/common.sh@345 -- # : 1 00:08:36.879 04:25:33 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.879 04:25:33 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.879 04:25:33 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:36.879 04:25:33 nvme -- scripts/common.sh@353 -- # local d=1 00:08:36.879 04:25:33 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.879 04:25:33 nvme -- scripts/common.sh@355 -- # echo 1 00:08:36.879 04:25:33 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.879 04:25:33 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:36.879 04:25:33 nvme -- scripts/common.sh@353 -- # local d=2 00:08:36.879 04:25:33 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.879 04:25:33 nvme -- scripts/common.sh@355 -- # echo 2 00:08:36.879 04:25:33 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.879 04:25:33 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.879 04:25:33 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.879 04:25:33 nvme -- scripts/common.sh@368 -- # return 0 00:08:36.879 04:25:33 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.879 04:25:33 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:36.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.879 --rc genhtml_branch_coverage=1 00:08:36.879 --rc genhtml_function_coverage=1 00:08:36.879 --rc genhtml_legend=1 00:08:36.879 --rc geninfo_all_blocks=1 00:08:36.879 --rc geninfo_unexecuted_blocks=1 00:08:36.879 00:08:36.879 ' 00:08:36.879 04:25:33 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:36.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.879 --rc genhtml_branch_coverage=1 00:08:36.879 --rc genhtml_function_coverage=1 00:08:36.879 --rc genhtml_legend=1 00:08:36.879 --rc geninfo_all_blocks=1 00:08:36.879 --rc geninfo_unexecuted_blocks=1 00:08:36.879 00:08:36.879 ' 00:08:36.879 04:25:33 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:36.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.879 --rc genhtml_branch_coverage=1 00:08:36.879 --rc genhtml_function_coverage=1 00:08:36.879 --rc genhtml_legend=1 00:08:36.879 --rc geninfo_all_blocks=1 00:08:36.879 --rc geninfo_unexecuted_blocks=1 00:08:36.879 00:08:36.879 ' 00:08:36.879 04:25:33 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:36.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.879 --rc genhtml_branch_coverage=1 00:08:36.879 --rc genhtml_function_coverage=1 00:08:36.879 --rc genhtml_legend=1 00:08:36.879 --rc geninfo_all_blocks=1 00:08:36.879 --rc geninfo_unexecuted_blocks=1 00:08:36.879 00:08:36.879 ' 00:08:36.879 04:25:33 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:37.138 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:37.710 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:37.710 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:37.973 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:37.973 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:37.973 04:25:34 nvme -- nvme/nvme.sh@79 -- # uname 00:08:37.973 04:25:34 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:37.973 04:25:34 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:37.973 04:25:34 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:37.973 04:25:34 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:37.973 04:25:34 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:37.973 04:25:34 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:37.973 Waiting for stub to ready for secondary processes... 00:08:37.973 04:25:34 nvme -- common/autotest_common.sh@1075 -- # stubpid=62806 00:08:37.973 04:25:34 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:37.973 04:25:34 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:37.973 04:25:34 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:37.973 04:25:34 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62806 ]] 00:08:37.973 04:25:34 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:37.973 [2024-11-27 04:25:34.451033] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:37.973 [2024-11-27 04:25:34.451185] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:38.920 [2024-11-27 04:25:35.386633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:38.920 04:25:35 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:38.920 04:25:35 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62806 ]] 00:08:38.920 04:25:35 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:39.181 [2024-11-27 04:25:35.505899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.181 [2024-11-27 04:25:35.506243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.181 [2024-11-27 04:25:35.506328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.181 [2024-11-27 04:25:35.523417] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:39.181 [2024-11-27 04:25:35.523470] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:39.181 [2024-11-27 04:25:35.538397] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:39.181 [2024-11-27 04:25:35.538513] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:39.181 [2024-11-27 04:25:35.542181] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:39.181 [2024-11-27 04:25:35.542554] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:39.181 [2024-11-27 04:25:35.542697] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:39.181 [2024-11-27 04:25:35.547178] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:39.181 [2024-11-27 04:25:35.547634] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:39.181 [2024-11-27 04:25:35.547850] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:39.181 [2024-11-27 04:25:35.550712] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:39.182 [2024-11-27 04:25:35.550969] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:39.182 [2024-11-27 04:25:35.551041] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:39.182 [2024-11-27 04:25:35.551090] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:39.182 [2024-11-27 04:25:35.551130] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:40.127 done. 00:08:40.127 04:25:36 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:40.127 04:25:36 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:40.127 04:25:36 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:40.127 04:25:36 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:40.127 04:25:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.127 04:25:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.127 ************************************ 00:08:40.127 START TEST nvme_reset 00:08:40.127 ************************************ 00:08:40.127 04:25:36 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:40.128 Initializing NVMe Controllers 00:08:40.128 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:40.128 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:40.128 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:40.128 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:40.128 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:40.128 00:08:40.128 real 0m0.242s 00:08:40.128 user 0m0.074s 00:08:40.128 sys 0m0.114s 00:08:40.128 04:25:36 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.128 ************************************ 00:08:40.128 END TEST nvme_reset 00:08:40.128 ************************************ 00:08:40.128 04:25:36 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:40.389 04:25:36 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:40.389 04:25:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.389 04:25:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.389 04:25:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.389 ************************************ 00:08:40.389 START TEST nvme_identify 00:08:40.390 ************************************ 00:08:40.390 04:25:36 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:40.390 04:25:36 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:40.390 04:25:36 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:40.390 04:25:36 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:40.390 04:25:36 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:40.390 04:25:36 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:40.390 04:25:36 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:40.390 04:25:36 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:40.390 04:25:36 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:40.390 04:25:36 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:40.390 04:25:36 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:40.390 04:25:36 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:40.390 04:25:36 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:40.655 [2024-11-27 04:25:37.014525] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62839 terminated unexpected 00:08:40.655 ===================================================== 00:08:40.655 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:40.655 ===================================================== 00:08:40.655 Controller Capabilities/Features 00:08:40.655 ================================ 00:08:40.655 Vendor ID: 1b36 00:08:40.655 Subsystem Vendor ID: 1af4 00:08:40.655 Serial Number: 12343 00:08:40.655 Model Number: QEMU NVMe Ctrl 00:08:40.655 Firmware Version: 8.0.0 00:08:40.655 Recommended Arb Burst: 6 00:08:40.655 IEEE OUI Identifier: 00 54 52 00:08:40.655 Multi-path I/O 00:08:40.655 May have multiple subsystem ports: No 00:08:40.655 May have multiple controllers: Yes 00:08:40.655 Associated with SR-IOV VF: No 00:08:40.655 Max Data Transfer Size: 524288 00:08:40.655 Max Number of Namespaces: 256 00:08:40.655 Max Number of I/O Queues: 64 00:08:40.655 NVMe Specification Version (VS): 1.4 00:08:40.655 NVMe Specification Version (Identify): 1.4 00:08:40.655 Maximum Queue Entries: 2048 00:08:40.655 Contiguous Queues Required: Yes 00:08:40.655 Arbitration Mechanisms Supported 00:08:40.655 Weighted Round Robin: Not Supported 00:08:40.655 Vendor Specific: Not Supported 00:08:40.655 Reset Timeout: 7500 ms 00:08:40.655 Doorbell Stride: 4 bytes 00:08:40.655 NVM Subsystem Reset: Not Supported 00:08:40.655 Command Sets Supported 00:08:40.655 NVM Command Set: Supported 00:08:40.655 Boot Partition: Not Supported 00:08:40.655 Memory Page Size Minimum: 4096 bytes 00:08:40.655 Memory Page Size Maximum: 65536 bytes 00:08:40.655 Persistent Memory Region: Not Supported 00:08:40.655 Optional Asynchronous Events Supported 00:08:40.655 Namespace Attribute Notices: Supported 00:08:40.655 Firmware Activation Notices: Not Supported 00:08:40.655 ANA Change Notices: Not Supported 00:08:40.655 PLE Aggregate Log Change Notices: Not Supported 00:08:40.655 LBA Status Info Alert Notices: Not Supported 00:08:40.655 EGE Aggregate Log Change Notices: Not Supported 00:08:40.656 Normal NVM Subsystem Shutdown event: Not Supported 00:08:40.656 Zone Descriptor Change Notices: Not Supported 00:08:40.656 Discovery Log Change Notices: Not Supported 00:08:40.656 Controller Attributes 00:08:40.656 128-bit Host Identifier: Not Supported 00:08:40.656 Non-Operational Permissive Mode: Not Supported 00:08:40.656 NVM Sets: Not Supported 00:08:40.656 Read Recovery Levels: Not Supported 00:08:40.656 Endurance Groups: Supported 00:08:40.656 Predictable Latency Mode: Not Supported 00:08:40.656 Traffic Based Keep ALive: Not Supported 00:08:40.656 Namespace Granularity: Not Supported 00:08:40.656 SQ Associations: Not Supported 00:08:40.656 UUID List: Not Supported 00:08:40.656 Multi-Domain Subsystem: Not Supported 00:08:40.656 Fixed Capacity Management: Not Supported 00:08:40.656 Variable Capacity Management: Not Supported 00:08:40.656 Delete Endurance Group: Not Supported 00:08:40.656 Delete NVM Set: Not Supported 00:08:40.656 Extended LBA Formats Supported: Supported 00:08:40.656 Flexible Data Placement Supported: Supported 00:08:40.656 00:08:40.656 Controller Memory Buffer Support 00:08:40.656 ================================ 00:08:40.656 Supported: No 00:08:40.656 
00:08:40.656 Persistent Memory Region Support 00:08:40.656 ================================ 00:08:40.656 Supported: No 00:08:40.656 00:08:40.656 Admin Command Set Attributes 00:08:40.656 ============================ 00:08:40.656 Security Send/Receive: Not Supported 00:08:40.656 Format NVM: Supported 00:08:40.656 Firmware Activate/Download: Not Supported 00:08:40.656 Namespace Management: Supported 00:08:40.656 Device Self-Test: Not Supported 00:08:40.656 Directives: Supported 00:08:40.656 NVMe-MI: Not Supported 00:08:40.656 Virtualization Management: Not Supported 00:08:40.656 Doorbell Buffer Config: Supported 00:08:40.656 Get LBA Status Capability: Not Supported 00:08:40.656 Command & Feature Lockdown Capability: Not Supported 00:08:40.656 Abort Command Limit: 4 00:08:40.656 Async Event Request Limit: 4 00:08:40.656 Number of Firmware Slots: N/A 00:08:40.656 Firmware Slot 1 Read-Only: N/A 00:08:40.656 Firmware Activation Without Reset: N/A 00:08:40.656 Multiple Update Detection Support: N/A 00:08:40.656 Firmware Update Granularity: No Information Provided 00:08:40.656 Per-Namespace SMART Log: Yes 00:08:40.656 Asymmetric Namespace Access Log Page: Not Supported 00:08:40.656 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:40.656 Command Effects Log Page: Supported 00:08:40.656 Get Log Page Extended Data: Supported 00:08:40.656 Telemetry Log Pages: Not Supported 00:08:40.656 Persistent Event Log Pages: Not Supported 00:08:40.656 Supported Log Pages Log Page: May Support 00:08:40.656 Commands Supported & Effects Log Page: Not Supported 00:08:40.656 Feature Identifiers & Effects Log Page:May Support 00:08:40.656 NVMe-MI Commands & Effects Log Page: May Support 00:08:40.656 Data Area 4 for Telemetry Log: Not Supported 00:08:40.656 Error Log Page Entries Supported: 1 00:08:40.656 Keep Alive: Not Supported 00:08:40.656 00:08:40.656 NVM Command Set Attributes 00:08:40.656 ========================== 00:08:40.656 Submission Queue Entry Size 00:08:40.656 Max: 64 00:08:40.656 Min: 64 00:08:40.656 Completion Queue Entry Size 00:08:40.656 Max: 16 00:08:40.656 Min: 16 00:08:40.656 Number of Namespaces: 256 00:08:40.656 Compare Command: Supported 00:08:40.656 Write Uncorrectable Command: Not Supported 00:08:40.656 Dataset Management Command: Supported 00:08:40.656 Write Zeroes Command: Supported 00:08:40.656 Set Features Save Field: Supported 00:08:40.656 Reservations: Not Supported 00:08:40.656 Timestamp: Supported 00:08:40.656 Copy: Supported 00:08:40.656 Volatile Write Cache: Present 00:08:40.656 Atomic Write Unit (Normal): 1 00:08:40.656 Atomic Write Unit (PFail): 1 00:08:40.656 Atomic Compare & Write Unit: 1 00:08:40.656 Fused Compare & Write: Not Supported 00:08:40.656 Scatter-Gather List 00:08:40.656 SGL Command Set: Supported 00:08:40.656 SGL Keyed: Not Supported 00:08:40.656 SGL Bit Bucket Descriptor: Not Supported 00:08:40.656 SGL Metadata Pointer: Not Supported 00:08:40.656 Oversized SGL: Not Supported 00:08:40.656 SGL Metadata Address: Not Supported 00:08:40.656 SGL Offset: Not Supported 00:08:40.656 Transport SGL Data Block: Not Supported 00:08:40.656 Replay Protected Memory Block: Not Supported 00:08:40.656 00:08:40.656 Firmware Slot Information 00:08:40.656 ========================= 00:08:40.656 Active slot: 1 00:08:40.656 Slot 1 Firmware Revision: 1.0 00:08:40.656 00:08:40.656 00:08:40.656 Commands Supported and Effects 00:08:40.656 ============================== 00:08:40.656 Admin Commands 00:08:40.656 -------------- 00:08:40.656 Delete I/O Submission Queue (00h): Supported 
00:08:40.656 Create I/O Submission Queue (01h): Supported 00:08:40.656 Get Log Page (02h): Supported 00:08:40.656 Delete I/O Completion Queue (04h): Supported 00:08:40.656 Create I/O Completion Queue (05h): Supported 00:08:40.656 Identify (06h): Supported 00:08:40.656 Abort (08h): Supported 00:08:40.656 Set Features (09h): Supported 00:08:40.656 Get Features (0Ah): Supported 00:08:40.656 Asynchronous Event Request (0Ch): Supported 00:08:40.656 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:40.656 Directive Send (19h): Supported 00:08:40.656 Directive Receive (1Ah): Supported 00:08:40.656 Virtualization Management (1Ch): Supported 00:08:40.656 Doorbell Buffer Config (7Ch): Supported 00:08:40.656 Format NVM (80h): Supported LBA-Change 00:08:40.656 I/O Commands 00:08:40.656 ------------ 00:08:40.656 Flush (00h): Supported LBA-Change 00:08:40.656 Write (01h): Supported LBA-Change 00:08:40.656 Read (02h): Supported 00:08:40.656 Compare (05h): Supported 00:08:40.656 Write Zeroes (08h): Supported LBA-Change 00:08:40.656 Dataset Management (09h): Supported LBA-Change 00:08:40.656 Unknown (0Ch): Supported 00:08:40.656 Unknown (12h): Supported 00:08:40.656 Copy (19h): Supported LBA-Change 00:08:40.656 Unknown (1Dh): Supported LBA-Change 00:08:40.656 00:08:40.656 Error Log 00:08:40.656 ========= 00:08:40.656 00:08:40.656 Arbitration 00:08:40.656 =========== 00:08:40.656 Arbitration Burst: no limit 00:08:40.656 00:08:40.656 Power Management 00:08:40.656 ================ 00:08:40.656 Number of Power States: 1 00:08:40.656 Current Power State: Power State #0 00:08:40.656 Power State #0: 00:08:40.656 Max Power: 25.00 W 00:08:40.656 Non-Operational State: Operational 00:08:40.656 Entry Latency: 16 microseconds 00:08:40.656 Exit Latency: 4 microseconds 00:08:40.656 Relative Read Throughput: 0 00:08:40.656 Relative Read Latency: 0 00:08:40.656 Relative Write Throughput: 0 00:08:40.656 Relative Write Latency: 0 00:08:40.656 Idle Power: Not Reported 00:08:40.656 Active Power: Not Reported 00:08:40.656 Non-Operational Permissive Mode: Not Supported 00:08:40.656 00:08:40.656 Health Information 00:08:40.656 ================== 00:08:40.656 Critical Warnings: 00:08:40.656 Available Spare Space: OK 00:08:40.656 Temperature: OK 00:08:40.656 Device Reliability: OK 00:08:40.656 Read Only: No 00:08:40.656 Volatile Memory Backup: OK 00:08:40.656 Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.656 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:40.656 Available Spare: 0% 00:08:40.656 Available Spare Threshold: 0% 00:08:40.656 Life Percentage Used: 0% 00:08:40.656 Data Units Read: 853 00:08:40.656 Data Units Written: 782 00:08:40.656 Host Read Commands: 39079 00:08:40.656 Host Write Commands: 38503 00:08:40.656 Controller Busy Time: 0 minutes 00:08:40.656 Power Cycles: 0 00:08:40.656 Power On Hours: 0 hours 00:08:40.656 Unsafe Shutdowns: 0 00:08:40.656 Unrecoverable Media Errors: 0 00:08:40.656 Lifetime Error Log Entries: 0 00:08:40.656 Warning Temperature Time: 0 minutes 00:08:40.656 Critical Temperature Time: 0 minutes 00:08:40.656 00:08:40.656 Number of Queues 00:08:40.656 ================ 00:08:40.656 Number of I/O Submission Queues: 64 00:08:40.656 Number of I/O Completion Queues: 64 00:08:40.656 00:08:40.656 ZNS Specific Controller Data 00:08:40.656 ============================ 00:08:40.656 Zone Append Size Limit: 0 00:08:40.656 00:08:40.656 00:08:40.656 Active Namespaces 00:08:40.656 ================= 00:08:40.656 Namespace ID:1 00:08:40.656 Error Recovery Timeout: Unlimited 00:08:40.656 
Command Set Identifier: NVM (00h) 00:08:40.656 Deallocate: Supported 00:08:40.656 Deallocated/Unwritten Error: Supported 00:08:40.656 Deallocated Read Value: All 0x00 00:08:40.656 Deallocate in Write Zeroes: Not Supported 00:08:40.656 Deallocated Guard Field: 0xFFFF 00:08:40.656 Flush: Supported 00:08:40.656 Reservation: Not Supported 00:08:40.657 Namespace Sharing Capabilities: Multiple Controllers 00:08:40.657 Size (in LBAs): 262144 (1GiB) 00:08:40.657 Capacity (in LBAs): 262144 (1GiB) 00:08:40.657 Utilization (in LBAs): 262144 (1GiB) 00:08:40.657 Thin Provisioning: Not Supported 00:08:40.657 Per-NS Atomic Units: No 00:08:40.657 Maximum Single Source Range Length: 128 00:08:40.657 Maximum Copy Length: 128 00:08:40.657 Maximum Source Range Count: 128 00:08:40.657 NGUID/EUI64 Never Reused: No 00:08:40.657 Namespace Write Protected: No 00:08:40.657 Endurance group ID: 1 00:08:40.657 Number of LBA Formats: 8 00:08:40.657 Current LBA Format: LBA Format #04 00:08:40.657 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:40.657 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:40.657 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:40.657 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:40.657 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:40.657 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:40.657 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:40.657 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:40.657 00:08:40.657 Get Feature FDP: 00:08:40.657 ================ 00:08:40.657 Enabled: Yes 00:08:40.657 FDP configuration index: 0 00:08:40.657 00:08:40.657 FDP configurations log page 00:08:40.657 =========================== 00:08:40.657 Number of FDP configurations: 1 00:08:40.657 Version: 0 00:08:40.657 Size: 112 00:08:40.657 FDP Configuration Descriptor: 0 00:08:40.657 Descriptor Size: 96 00:08:40.657 Reclaim Group Identifier format: 2 00:08:40.657 FDP Volatile Write Cache: Not Present 00:08:40.657 FDP Configuration: Valid 00:08:40.657 Vendor Specific Size: 0 00:08:40.657 Number of Reclaim Groups: 2 00:08:40.657 Number of Recalim Unit Handles: 8 00:08:40.657 Max Placement Identifiers: 128 00:08:40.657 Number of Namespaces Suppprted: 256 00:08:40.657 Reclaim unit Nominal Size: 6000000 bytes 00:08:40.657 Estimated Reclaim Unit Time Limit: Not Reported 00:08:40.657 RUH Desc #000: RUH Type: Initially Isolated 00:08:40.657 RUH Desc #001: RUH Type: Initially Isolated 00:08:40.657 RUH Desc #002: RUH Type: Initially Isolated 00:08:40.657 RUH Desc #003: RUH Type: Initially Isolated 00:08:40.657 RUH Desc #004: RUH Type: Initially Isolated 00:08:40.657 RUH Desc #005: RUH Type: Initially Isolated 00:08:40.657 RUH Desc #006: RUH Type: Initially Isolated 00:08:40.657 RUH Desc #007: RUH Type: Initially Isolated 00:08:40.657 00:08:40.657 FDP reclaim unit handle usage log page 00:08:40.657 ==================================[2024-11-27 04:25:37.019022] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62839 terminated unexpected 00:08:40.657 ==== 00:08:40.657 Number of Reclaim Unit Handles: 8 00:08:40.657 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:40.657 RUH Usage Desc #001: RUH Attributes: Unused 00:08:40.657 RUH Usage Desc #002: RUH Attributes: Unused 00:08:40.657 RUH Usage Desc #003: RUH Attributes: Unused 00:08:40.657 RUH Usage Desc #004: RUH Attributes: Unused 00:08:40.657 RUH Usage Desc #005: RUH Attributes: Unused 00:08:40.657 RUH Usage Desc #006: RUH Attributes: Unused 00:08:40.657 RUH Usage Desc 
#007: RUH Attributes: Unused 00:08:40.657 00:08:40.657 FDP statistics log page 00:08:40.657 ======================= 00:08:40.657 Host bytes with metadata written: 499359744 00:08:40.657 Media bytes with metadata written: 499412992 00:08:40.657 Media bytes erased: 0 00:08:40.657 00:08:40.657 FDP events log page 00:08:40.657 =================== 00:08:40.657 Number of FDP events: 0 00:08:40.657 00:08:40.657 NVM Specific Namespace Data 00:08:40.657 =========================== 00:08:40.657 Logical Block Storage Tag Mask: 0 00:08:40.657 Protection Information Capabilities: 00:08:40.657 16b Guard Protection Information Storage Tag Support: No 00:08:40.657 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:40.657 Storage Tag Check Read Support: No 00:08:40.657 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.657 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.657 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.657 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.657 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.657 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.657 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.657 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.657 ===================================================== 00:08:40.657 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:40.657 ===================================================== 00:08:40.657 Controller Capabilities/Features 00:08:40.657 ================================ 00:08:40.657 Vendor ID: 1b36 00:08:40.657 Subsystem Vendor ID: 1af4 00:08:40.657 Serial Number: 12340 00:08:40.657 Model Number: QEMU NVMe Ctrl 00:08:40.657 Firmware Version: 8.0.0 00:08:40.657 Recommended Arb Burst: 6 00:08:40.657 IEEE OUI Identifier: 00 54 52 00:08:40.657 Multi-path I/O 00:08:40.657 May have multiple subsystem ports: No 00:08:40.657 May have multiple controllers: No 00:08:40.657 Associated with SR-IOV VF: No 00:08:40.657 Max Data Transfer Size: 524288 00:08:40.657 Max Number of Namespaces: 256 00:08:40.657 Max Number of I/O Queues: 64 00:08:40.657 NVMe Specification Version (VS): 1.4 00:08:40.657 NVMe Specification Version (Identify): 1.4 00:08:40.657 Maximum Queue Entries: 2048 00:08:40.657 Contiguous Queues Required: Yes 00:08:40.657 Arbitration Mechanisms Supported 00:08:40.657 Weighted Round Robin: Not Supported 00:08:40.657 Vendor Specific: Not Supported 00:08:40.657 Reset Timeout: 7500 ms 00:08:40.657 Doorbell Stride: 4 bytes 00:08:40.657 NVM Subsystem Reset: Not Supported 00:08:40.657 Command Sets Supported 00:08:40.657 NVM Command Set: Supported 00:08:40.657 Boot Partition: Not Supported 00:08:40.657 Memory Page Size Minimum: 4096 bytes 00:08:40.657 Memory Page Size Maximum: 65536 bytes 00:08:40.657 Persistent Memory Region: Not Supported 00:08:40.657 Optional Asynchronous Events Supported 00:08:40.657 Namespace Attribute Notices: Supported 00:08:40.657 Firmware Activation Notices: Not Supported 00:08:40.657 ANA Change Notices: Not Supported 00:08:40.657 PLE Aggregate Log Change Notices: Not Supported 00:08:40.657 LBA Status Info Alert Notices: Not Supported 00:08:40.657 EGE Aggregate Log Change 
Notices: Not Supported 00:08:40.657 Normal NVM Subsystem Shutdown event: Not Supported 00:08:40.657 Zone Descriptor Change Notices: Not Supported 00:08:40.657 Discovery Log Change Notices: Not Supported 00:08:40.657 Controller Attributes 00:08:40.657 128-bit Host Identifier: Not Supported 00:08:40.657 Non-Operational Permissive Mode: Not Supported 00:08:40.657 NVM Sets: Not Supported 00:08:40.657 Read Recovery Levels: Not Supported 00:08:40.657 Endurance Groups: Not Supported 00:08:40.657 Predictable Latency Mode: Not Supported 00:08:40.657 Traffic Based Keep ALive: Not Supported 00:08:40.657 Namespace Granularity: Not Supported 00:08:40.657 SQ Associations: Not Supported 00:08:40.657 UUID List: Not Supported 00:08:40.657 Multi-Domain Subsystem: Not Supported 00:08:40.657 Fixed Capacity Management: Not Supported 00:08:40.657 Variable Capacity Management: Not Supported 00:08:40.657 Delete Endurance Group: Not Supported 00:08:40.657 Delete NVM Set: Not Supported 00:08:40.657 Extended LBA Formats Supported: Supported 00:08:40.657 Flexible Data Placement Supported: Not Supported 00:08:40.657 00:08:40.657 Controller Memory Buffer Support 00:08:40.657 ================================ 00:08:40.657 Supported: No 00:08:40.657 00:08:40.657 Persistent Memory Region Support 00:08:40.657 ================================ 00:08:40.657 Supported: No 00:08:40.657 00:08:40.657 Admin Command Set Attributes 00:08:40.657 ============================ 00:08:40.657 Security Send/Receive: Not Supported 00:08:40.657 Format NVM: Supported 00:08:40.657 Firmware Activate/Download: Not Supported 00:08:40.657 Namespace Management: Supported 00:08:40.657 Device Self-Test: Not Supported 00:08:40.657 Directives: Supported 00:08:40.657 NVMe-MI: Not Supported 00:08:40.657 Virtualization Management: Not Supported 00:08:40.657 Doorbell Buffer Config: Supported 00:08:40.657 Get LBA Status Capability: Not Supported 00:08:40.657 Command & Feature Lockdown Capability: Not Supported 00:08:40.657 Abort Command Limit: 4 00:08:40.657 Async Event Request Limit: 4 00:08:40.657 Number of Firmware Slots: N/A 00:08:40.657 Firmware Slot 1 Read-Only: N/A 00:08:40.657 Firmware Activation Without Reset: N/A 00:08:40.657 Multiple Update Detection Support: N/A 00:08:40.657 Firmware Update Granularity: No Information Provided 00:08:40.657 Per-Namespace SMART Log: Yes 00:08:40.657 Asymmetric Namespace Access Log Page: Not Supported 00:08:40.657 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:40.657 Command Effects Log Page: Supported 00:08:40.657 Get Log Page Extended Data: Supported 00:08:40.657 Telemetry Log Pages: Not Supported 00:08:40.658 Persistent Event Log Pages: Not Supported 00:08:40.658 Supported Log Pages Log Page: May Support 00:08:40.658 Commands Supported & Effects Log Page: Not Supported 00:08:40.658 Feature Identifiers & Effects Log Page:May Support 00:08:40.658 NVMe-MI Commands & Effects Log Page: May Support 00:08:40.658 Data Area 4 for Telemetry Log: Not Supported 00:08:40.658 Error Log Page Entries Supported: 1 00:08:40.658 Keep Alive: Not Supported 00:08:40.658 00:08:40.658 NVM Command Set Attributes 00:08:40.658 ========================== 00:08:40.658 Submission Queue Entry Size 00:08:40.658 Max: 64 00:08:40.658 Min: 64 00:08:40.658 Completion Queue Entry Size 00:08:40.658 Max: 16 00:08:40.658 Min: 16 00:08:40.658 Number of Namespaces: 256 00:08:40.658 Compare Command: Supported 00:08:40.658 Write Uncorrectable Command: Not Supported 00:08:40.658 Dataset Management Command: Supported 00:08:40.658 Write Zeroes Command: 
Supported 00:08:40.658 Set Features Save Field: Supported 00:08:40.658 Reservations: Not Supported 00:08:40.658 Timestamp: Supported 00:08:40.658 Copy: Supported 00:08:40.658 Volatile Write Cache: Present 00:08:40.658 Atomic Write Unit (Normal): 1 00:08:40.658 Atomic Write Unit (PFail): 1 00:08:40.658 Atomic Compare & Write Unit: 1 00:08:40.658 Fused Compare & Write: Not Supported 00:08:40.658 Scatter-Gather List 00:08:40.658 SGL Command Set: Supported 00:08:40.658 SGL Keyed: Not Supported 00:08:40.658 SGL Bit Bucket Descriptor: Not Supported 00:08:40.658 SGL Metadata Pointer: Not Supported 00:08:40.658 Oversized SGL: Not Supported 00:08:40.658 SGL Metadata Address: Not Supported 00:08:40.658 SGL Offset: Not Supported 00:08:40.658 Transport SGL Data Block: Not Supported 00:08:40.658 Replay Protected Memory Block: Not Supported 00:08:40.658 00:08:40.658 Firmware Slot Information 00:08:40.658 ========================= 00:08:40.658 Active slot: 1 00:08:40.658 Slot 1 Firmware Revision: 1.0 00:08:40.658 00:08:40.658 00:08:40.658 Commands Supported and Effects 00:08:40.658 ============================== 00:08:40.658 Admin Commands 00:08:40.658 -------------- 00:08:40.658 Delete I/O Submission Queue (00h): Supported 00:08:40.658 Create I/O Submission Queue (01h): Supported 00:08:40.658 Get Log Page (02h): Supported 00:08:40.658 Delete I/O Completion Queue (04h): Supported 00:08:40.658 Create I/O Completion Queue (05h): Supported 00:08:40.658 Identify (06h): Supported 00:08:40.658 Abort (08h): Supported 00:08:40.658 Set Features (09h): Supported 00:08:40.658 Get Features (0Ah): Supported 00:08:40.658 Asynchronous Event Request (0Ch): Supported 00:08:40.658 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:40.658 Directive Send (19h): Supported 00:08:40.658 Directive Receive (1Ah): Supported 00:08:40.658 Virtualization Management (1Ch): Supported 00:08:40.658 Doorbell Buffer Config (7Ch): Supported 00:08:40.658 Format NVM (80h): Supported LBA-Change 00:08:40.658 I/O Commands 00:08:40.658 ------------ 00:08:40.658 Flush (00h): Supported LBA-Change 00:08:40.658 Write (01h): Supported LBA-Change 00:08:40.658 Read (02h): Supported 00:08:40.658 Compare (05h): Supported 00:08:40.658 Write Zeroes (08h): Supported LBA-Change 00:08:40.658 Dataset Management (09h): Supported LBA-Change 00:08:40.658 Unknown (0Ch): Supported 00:08:40.658 Unknown (12h): Supported 00:08:40.658 Copy (19h): Supported LBA-Change 00:08:40.658 Unknown (1Dh): Supported LBA-Change 00:08:40.658 00:08:40.658 Error Log 00:08:40.658 ========= 00:08:40.658 00:08:40.658 Arbitration 00:08:40.658 =========== 00:08:40.658 Arbitration Burst: no limit 00:08:40.658 00:08:40.658 Power Management 00:08:40.658 ================ 00:08:40.658 Number of Power States: 1 00:08:40.658 Current Power State: Power State #0 00:08:40.658 Power State #0: 00:08:40.658 Max Power: 25.00 W 00:08:40.658 Non-Operational State: Operational 00:08:40.658 Entry Latency: 16 microseconds 00:08:40.658 Exit Latency: 4 microseconds 00:08:40.658 Relative Read Throughput: 0 00:08:40.658 Relative Read Latency: 0 00:08:40.658 Relative Write Throughput: 0 00:08:40.658 Relative Write Latency: 0 00:08:40.658 Idle Power: Not Reported 00:08:40.658 Active Power: Not Reported 00:08:40.658 Non-Operational Permissive Mode: Not Supported 00:08:40.658 00:08:40.658 Health Information 00:08:40.658 ================== 00:08:40.658 Critical Warnings: 00:08:40.658 Available Spare Space: OK 00:08:40.658 Temperature: OK 00:08:40.658 Device Reliability: OK 00:08:40.658 Read Only: No 
00:08:40.658 Volatile Memory Backup: OK 00:08:40.658 Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.658 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:40.658 Available Spare: 0% 00:08:40.658 Available Spare Threshold: 0% 00:08:40.658 Life Percentage Used: 0% 00:08:40.658 Data Units Read: 708 00:08:40.658 Data Units Written: 636 00:08:40.658 Host Read Commands: 37529 00:08:40.658 Host Write Commands: 37315 00:08:40.658 Controller Busy Time: 0 minutes 00:08:40.658 Power Cycles: 0 00:08:40.658 Power On Hours: 0 hours 00:08:40.658 Unsafe Shutdowns: 0 00:08:40.658 Unrecoverable Media Errors: 0 00:08:40.658 Lifetime Error Log Entries: 0 00:08:40.658 Warning Temperature Time: 0 minutes 00:08:40.658 Critical Temperature Time: 0 minutes 00:08:40.658 00:08:40.658 Number of Queues 00:08:40.658 ================ 00:08:40.658 Number of I/O Submission Queues: 64 00:08:40.658 Number of I/O Completion Queues: 64 00:08:40.658 00:08:40.658 ZNS Specific Controller Data 00:08:40.658 ============================ 00:08:40.658 Zone Append Size Limit: 0 00:08:40.658 00:08:40.658 00:08:40.658 Active Namespaces 00:08:40.658 ================= 00:08:40.658 Namespace ID:1 00:08:40.658 Error Recovery Timeout: Unlimited 00:08:40.658 Command Set Identifier: NVM (00h) 00:08:40.658 Deallocate: Supported 00:08:40.658 Deallocated/Unwritten Error: Supported 00:08:40.658 Deallocated Read Value: All 0x00 00:08:40.658 Deallocate in Write Zeroes: Not Supported 00:08:40.658 Deallocated Guard Field: 0xFFFF 00:08:40.658 Flush: Supported 00:08:40.658 Reservation: Not Supported 00:08:40.658 Metadata Transferred as: Separate Metadata Buffer 00:08:40.658 Namespace Sharing Capabilities: Private 00:08:40.658 Size (in LBAs): 1548666 (5GiB) 00:08:40.658 Capacity (in LBAs): 1548666 (5GiB) 00:08:40.658 Utilization (in LBAs): 1548666 (5GiB) 00:08:40.658 Thin Provisioning: Not Supported 00:08:40.658 Per-NS Atomic Units: No 00:08:40.658 Maximum Single Source Range Length: 128 00:08:40.658 Maximum Copy Length: 128 00:08:40.658 Maximum Source Range Count: 128 00:08:40.658 NGUID/EUI64 Never Reused: No 00:08:40.658 Namespace Write Protected: No 00:08:40.658 Number of LBA Formats: 8 00:08:40.658 Current LBA Format: [2024-11-27 04:25:37.021013] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62839 terminated unexpected 00:08:40.658 LBA Format #07 00:08:40.658 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:40.658 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:40.658 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:40.658 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:40.658 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:40.658 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:40.658 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:40.658 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:40.658 00:08:40.658 NVM Specific Namespace Data 00:08:40.658 =========================== 00:08:40.658 Logical Block Storage Tag Mask: 0 00:08:40.658 Protection Information Capabilities: 00:08:40.658 16b Guard Protection Information Storage Tag Support: No 00:08:40.658 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:40.658 Storage Tag Check Read Support: No 00:08:40.658 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.658 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.658 Extended LBA Format #02: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:08:40.658 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.658 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.658 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.658 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.658 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.658 ===================================================== 00:08:40.658 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:40.658 ===================================================== 00:08:40.658 Controller Capabilities/Features 00:08:40.658 ================================ 00:08:40.658 Vendor ID: 1b36 00:08:40.658 Subsystem Vendor ID: 1af4 00:08:40.658 Serial Number: 12341 00:08:40.658 Model Number: QEMU NVMe Ctrl 00:08:40.658 Firmware Version: 8.0.0 00:08:40.658 Recommended Arb Burst: 6 00:08:40.659 IEEE OUI Identifier: 00 54 52 00:08:40.659 Multi-path I/O 00:08:40.659 May have multiple subsystem ports: No 00:08:40.659 May have multiple controllers: No 00:08:40.659 Associated with SR-IOV VF: No 00:08:40.659 Max Data Transfer Size: 524288 00:08:40.659 Max Number of Namespaces: 256 00:08:40.659 Max Number of I/O Queues: 64 00:08:40.659 NVMe Specification Version (VS): 1.4 00:08:40.659 NVMe Specification Version (Identify): 1.4 00:08:40.659 Maximum Queue Entries: 2048 00:08:40.659 Contiguous Queues Required: Yes 00:08:40.659 Arbitration Mechanisms Supported 00:08:40.659 Weighted Round Robin: Not Supported 00:08:40.659 Vendor Specific: Not Supported 00:08:40.659 Reset Timeout: 7500 ms 00:08:40.659 Doorbell Stride: 4 bytes 00:08:40.659 NVM Subsystem Reset: Not Supported 00:08:40.659 Command Sets Supported 00:08:40.659 NVM Command Set: Supported 00:08:40.659 Boot Partition: Not Supported 00:08:40.659 Memory Page Size Minimum: 4096 bytes 00:08:40.659 Memory Page Size Maximum: 65536 bytes 00:08:40.659 Persistent Memory Region: Not Supported 00:08:40.659 Optional Asynchronous Events Supported 00:08:40.659 Namespace Attribute Notices: Supported 00:08:40.659 Firmware Activation Notices: Not Supported 00:08:40.659 ANA Change Notices: Not Supported 00:08:40.659 PLE Aggregate Log Change Notices: Not Supported 00:08:40.659 LBA Status Info Alert Notices: Not Supported 00:08:40.659 EGE Aggregate Log Change Notices: Not Supported 00:08:40.659 Normal NVM Subsystem Shutdown event: Not Supported 00:08:40.659 Zone Descriptor Change Notices: Not Supported 00:08:40.659 Discovery Log Change Notices: Not Supported 00:08:40.659 Controller Attributes 00:08:40.659 128-bit Host Identifier: Not Supported 00:08:40.659 Non-Operational Permissive Mode: Not Supported 00:08:40.659 NVM Sets: Not Supported 00:08:40.659 Read Recovery Levels: Not Supported 00:08:40.659 Endurance Groups: Not Supported 00:08:40.659 Predictable Latency Mode: Not Supported 00:08:40.659 Traffic Based Keep ALive: Not Supported 00:08:40.659 Namespace Granularity: Not Supported 00:08:40.659 SQ Associations: Not Supported 00:08:40.659 UUID List: Not Supported 00:08:40.659 Multi-Domain Subsystem: Not Supported 00:08:40.659 Fixed Capacity Management: Not Supported 00:08:40.659 Variable Capacity Management: Not Supported 00:08:40.659 Delete Endurance Group: Not Supported 00:08:40.659 Delete NVM Set: Not Supported 00:08:40.659 Extended LBA Formats Supported: Supported 00:08:40.659 Flexible Data Placement 
Supported: Not Supported 00:08:40.659 00:08:40.659 Controller Memory Buffer Support 00:08:40.659 ================================ 00:08:40.659 Supported: No 00:08:40.659 00:08:40.659 Persistent Memory Region Support 00:08:40.659 ================================ 00:08:40.659 Supported: No 00:08:40.659 00:08:40.659 Admin Command Set Attributes 00:08:40.659 ============================ 00:08:40.659 Security Send/Receive: Not Supported 00:08:40.659 Format NVM: Supported 00:08:40.659 Firmware Activate/Download: Not Supported 00:08:40.659 Namespace Management: Supported 00:08:40.659 Device Self-Test: Not Supported 00:08:40.659 Directives: Supported 00:08:40.659 NVMe-MI: Not Supported 00:08:40.659 Virtualization Management: Not Supported 00:08:40.659 Doorbell Buffer Config: Supported 00:08:40.659 Get LBA Status Capability: Not Supported 00:08:40.659 Command & Feature Lockdown Capability: Not Supported 00:08:40.659 Abort Command Limit: 4 00:08:40.659 Async Event Request Limit: 4 00:08:40.659 Number of Firmware Slots: N/A 00:08:40.659 Firmware Slot 1 Read-Only: N/A 00:08:40.659 Firmware Activation Without Reset: N/A 00:08:40.659 Multiple Update Detection Support: N/A 00:08:40.659 Firmware Update Granularity: No Information Provided 00:08:40.659 Per-Namespace SMART Log: Yes 00:08:40.659 Asymmetric Namespace Access Log Page: Not Supported 00:08:40.659 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:40.659 Command Effects Log Page: Supported 00:08:40.659 Get Log Page Extended Data: Supported 00:08:40.659 Telemetry Log Pages: Not Supported 00:08:40.659 Persistent Event Log Pages: Not Supported 00:08:40.659 Supported Log Pages Log Page: May Support 00:08:40.659 Commands Supported & Effects Log Page: Not Supported 00:08:40.659 Feature Identifiers & Effects Log Page:May Support 00:08:40.659 NVMe-MI Commands & Effects Log Page: May Support 00:08:40.659 Data Area 4 for Telemetry Log: Not Supported 00:08:40.659 Error Log Page Entries Supported: 1 00:08:40.659 Keep Alive: Not Supported 00:08:40.659 00:08:40.659 NVM Command Set Attributes 00:08:40.659 ========================== 00:08:40.659 Submission Queue Entry Size 00:08:40.659 Max: 64 00:08:40.659 Min: 64 00:08:40.659 Completion Queue Entry Size 00:08:40.659 Max: 16 00:08:40.659 Min: 16 00:08:40.659 Number of Namespaces: 256 00:08:40.659 Compare Command: Supported 00:08:40.659 Write Uncorrectable Command: Not Supported 00:08:40.659 Dataset Management Command: Supported 00:08:40.659 Write Zeroes Command: Supported 00:08:40.659 Set Features Save Field: Supported 00:08:40.659 Reservations: Not Supported 00:08:40.659 Timestamp: Supported 00:08:40.659 Copy: Supported 00:08:40.659 Volatile Write Cache: Present 00:08:40.659 Atomic Write Unit (Normal): 1 00:08:40.659 Atomic Write Unit (PFail): 1 00:08:40.659 Atomic Compare & Write Unit: 1 00:08:40.659 Fused Compare & Write: Not Supported 00:08:40.659 Scatter-Gather List 00:08:40.659 SGL Command Set: Supported 00:08:40.659 SGL Keyed: Not Supported 00:08:40.659 SGL Bit Bucket Descriptor: Not Supported 00:08:40.659 SGL Metadata Pointer: Not Supported 00:08:40.659 Oversized SGL: Not Supported 00:08:40.659 SGL Metadata Address: Not Supported 00:08:40.659 SGL Offset: Not Supported 00:08:40.659 Transport SGL Data Block: Not Supported 00:08:40.659 Replay Protected Memory Block: Not Supported 00:08:40.659 00:08:40.659 Firmware Slot Information 00:08:40.659 ========================= 00:08:40.659 Active slot: 1 00:08:40.659 Slot 1 Firmware Revision: 1.0 00:08:40.659 00:08:40.659 00:08:40.659 Commands Supported and Effects 
00:08:40.659 ============================== 00:08:40.659 Admin Commands 00:08:40.659 -------------- 00:08:40.659 Delete I/O Submission Queue (00h): Supported 00:08:40.659 Create I/O Submission Queue (01h): Supported 00:08:40.659 Get Log Page (02h): Supported 00:08:40.659 Delete I/O Completion Queue (04h): Supported 00:08:40.659 Create I/O Completion Queue (05h): Supported 00:08:40.659 Identify (06h): Supported 00:08:40.659 Abort (08h): Supported 00:08:40.659 Set Features (09h): Supported 00:08:40.659 Get Features (0Ah): Supported 00:08:40.659 Asynchronous Event Request (0Ch): Supported 00:08:40.659 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:40.659 Directive Send (19h): Supported 00:08:40.659 Directive Receive (1Ah): Supported 00:08:40.659 Virtualization Management (1Ch): Supported 00:08:40.659 Doorbell Buffer Config (7Ch): Supported 00:08:40.659 Format NVM (80h): Supported LBA-Change 00:08:40.659 I/O Commands 00:08:40.659 ------------ 00:08:40.659 Flush (00h): Supported LBA-Change 00:08:40.659 Write (01h): Supported LBA-Change 00:08:40.659 Read (02h): Supported 00:08:40.659 Compare (05h): Supported 00:08:40.659 Write Zeroes (08h): Supported LBA-Change 00:08:40.659 Dataset Management (09h): Supported LBA-Change 00:08:40.659 Unknown (0Ch): Supported 00:08:40.659 Unknown (12h): Supported 00:08:40.659 Copy (19h): Supported LBA-Change 00:08:40.659 Unknown (1Dh): Supported LBA-Change 00:08:40.659 00:08:40.659 Error Log 00:08:40.659 ========= 00:08:40.659 00:08:40.659 Arbitration 00:08:40.659 =========== 00:08:40.659 Arbitration Burst: no limit 00:08:40.659 00:08:40.659 Power Management 00:08:40.659 ================ 00:08:40.659 Number of Power States: 1 00:08:40.659 Current Power State: Power State #0 00:08:40.659 Power State #0: 00:08:40.659 Max Power: 25.00 W 00:08:40.659 Non-Operational State: Operational 00:08:40.659 Entry Latency: 16 microseconds 00:08:40.659 Exit Latency: 4 microseconds 00:08:40.659 Relative Read Throughput: 0 00:08:40.659 Relative Read Latency: 0 00:08:40.659 Relative Write Throughput: 0 00:08:40.659 Relative Write Latency: 0 00:08:40.659 Idle Power: Not Reported 00:08:40.659 Active Power: Not Reported 00:08:40.659 Non-Operational Permissive Mode: Not Supported 00:08:40.659 00:08:40.659 Health Information 00:08:40.659 ================== 00:08:40.659 Critical Warnings: 00:08:40.659 Available Spare Space: OK 00:08:40.659 Temperature: OK 00:08:40.659 Device Reliability: OK 00:08:40.659 Read Only: No 00:08:40.659 Volatile Memory Backup: OK 00:08:40.659 Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.659 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:40.659 Available Spare: 0% 00:08:40.659 Available Spare Threshold: 0% 00:08:40.660 Life Percentage Used: 0% 00:08:40.660 Data Units Read: 1055 00:08:40.660 Data Units Written: 920 00:08:40.660 Host Read Commands: 54905 00:08:40.660 Host Write Commands: 53637 00:08:40.660 Controller Busy Time: 0 minutes 00:08:40.660 Power Cycles: 0 00:08:40.660 Power On Hours: 0 hours 00:08:40.660 Unsafe Shutdowns: 0 00:08:40.660 Unrecoverable Media Errors: 0 00:08:40.660 Lifetime Error Log Entries: 0 00:08:40.660 Warning Temperature Time: 0 minutes 00:08:40.660 Critical Temperature Time: 0 minutes 00:08:40.660 00:08:40.660 Number of Queues 00:08:40.660 ================ 00:08:40.660 Number of I/O Submission Queues: 64 00:08:40.660 Number of I/O Completion Queues: 64 00:08:40.660 00:08:40.660 ZNS Specific Controller Data 00:08:40.660 ============================ 00:08:40.660 Zone Append Size Limit: 0 00:08:40.660 
00:08:40.660 00:08:40.660 Active Namespaces 00:08:40.660 ================= 00:08:40.660 Namespace ID:1 00:08:40.660 Error Recovery Timeout: Unlimited 00:08:40.660 Command Set Identifier: NVM (00h) 00:08:40.660 Deallocate: Supported 00:08:40.660 Deallocated/Unwritten Error: Supported 00:08:40.660 Deallocated Read Value: All 0x00 00:08:40.660 Deallocate in Write Zeroes: Not Supported 00:08:40.660 Deallocated Guard Field: 0xFFFF 00:08:40.660 Flush: Supported 00:08:40.660 Reservation: Not Supported 00:08:40.660 Namespace Sharing Capabilities: Private 00:08:40.660 Size (in LBAs): 1310720 (5GiB) 00:08:40.660 Capacity (in LBAs): 1310720 (5GiB) 00:08:40.660 Utilization (in LBAs): 1310720 (5GiB) 00:08:40.660 Thin Provisioning: Not Supported 00:08:40.660 Per-NS Atomic Units: No 00:08:40.660 Maximum Single Source Range Length: 128 00:08:40.660 Maximum Copy Length: 128 00:08:40.660 Maximum Source Range Count: 128 00:08:40.660 NGUID/EUI64 Never Reused: No 00:08:40.660 Namespace Write Protected: No 00:08:40.660 Number of LBA Formats: 8 00:08:40.660 Current LBA Format: LBA Format #04 00:08:40.660 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:40.660 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:40.660 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:40.660 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:40.660 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:40.660 LBA Forma[2024-11-27 04:25:37.023122] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62839 terminated unexpected 00:08:40.660 t #05: Data Size: 4096 Metadata Size: 8 00:08:40.660 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:40.660 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:40.660 00:08:40.660 NVM Specific Namespace Data 00:08:40.660 =========================== 00:08:40.660 Logical Block Storage Tag Mask: 0 00:08:40.660 Protection Information Capabilities: 00:08:40.660 16b Guard Protection Information Storage Tag Support: No 00:08:40.660 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:40.660 Storage Tag Check Read Support: No 00:08:40.660 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.660 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.660 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.660 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.660 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.660 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.660 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.660 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.660 ===================================================== 00:08:40.660 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:40.660 ===================================================== 00:08:40.660 Controller Capabilities/Features 00:08:40.660 ================================ 00:08:40.660 Vendor ID: 1b36 00:08:40.660 Subsystem Vendor ID: 1af4 00:08:40.660 Serial Number: 12342 00:08:40.660 Model Number: QEMU NVMe Ctrl 00:08:40.660 Firmware Version: 8.0.0 00:08:40.660 Recommended Arb Burst: 6 00:08:40.660 IEEE OUI Identifier: 00 54 52 00:08:40.660 Multi-path I/O 
00:08:40.660 May have multiple subsystem ports: No 00:08:40.660 May have multiple controllers: No 00:08:40.660 Associated with SR-IOV VF: No 00:08:40.660 Max Data Transfer Size: 524288 00:08:40.660 Max Number of Namespaces: 256 00:08:40.660 Max Number of I/O Queues: 64 00:08:40.660 NVMe Specification Version (VS): 1.4 00:08:40.660 NVMe Specification Version (Identify): 1.4 00:08:40.660 Maximum Queue Entries: 2048 00:08:40.660 Contiguous Queues Required: Yes 00:08:40.660 Arbitration Mechanisms Supported 00:08:40.660 Weighted Round Robin: Not Supported 00:08:40.660 Vendor Specific: Not Supported 00:08:40.660 Reset Timeout: 7500 ms 00:08:40.660 Doorbell Stride: 4 bytes 00:08:40.660 NVM Subsystem Reset: Not Supported 00:08:40.660 Command Sets Supported 00:08:40.660 NVM Command Set: Supported 00:08:40.660 Boot Partition: Not Supported 00:08:40.660 Memory Page Size Minimum: 4096 bytes 00:08:40.660 Memory Page Size Maximum: 65536 bytes 00:08:40.660 Persistent Memory Region: Not Supported 00:08:40.660 Optional Asynchronous Events Supported 00:08:40.660 Namespace Attribute Notices: Supported 00:08:40.660 Firmware Activation Notices: Not Supported 00:08:40.660 ANA Change Notices: Not Supported 00:08:40.660 PLE Aggregate Log Change Notices: Not Supported 00:08:40.660 LBA Status Info Alert Notices: Not Supported 00:08:40.660 EGE Aggregate Log Change Notices: Not Supported 00:08:40.660 Normal NVM Subsystem Shutdown event: Not Supported 00:08:40.660 Zone Descriptor Change Notices: Not Supported 00:08:40.660 Discovery Log Change Notices: Not Supported 00:08:40.660 Controller Attributes 00:08:40.660 128-bit Host Identifier: Not Supported 00:08:40.660 Non-Operational Permissive Mode: Not Supported 00:08:40.660 NVM Sets: Not Supported 00:08:40.660 Read Recovery Levels: Not Supported 00:08:40.660 Endurance Groups: Not Supported 00:08:40.660 Predictable Latency Mode: Not Supported 00:08:40.660 Traffic Based Keep ALive: Not Supported 00:08:40.660 Namespace Granularity: Not Supported 00:08:40.660 SQ Associations: Not Supported 00:08:40.660 UUID List: Not Supported 00:08:40.660 Multi-Domain Subsystem: Not Supported 00:08:40.660 Fixed Capacity Management: Not Supported 00:08:40.660 Variable Capacity Management: Not Supported 00:08:40.660 Delete Endurance Group: Not Supported 00:08:40.660 Delete NVM Set: Not Supported 00:08:40.660 Extended LBA Formats Supported: Supported 00:08:40.660 Flexible Data Placement Supported: Not Supported 00:08:40.660 00:08:40.660 Controller Memory Buffer Support 00:08:40.660 ================================ 00:08:40.660 Supported: No 00:08:40.660 00:08:40.660 Persistent Memory Region Support 00:08:40.660 ================================ 00:08:40.660 Supported: No 00:08:40.660 00:08:40.660 Admin Command Set Attributes 00:08:40.660 ============================ 00:08:40.660 Security Send/Receive: Not Supported 00:08:40.660 Format NVM: Supported 00:08:40.660 Firmware Activate/Download: Not Supported 00:08:40.660 Namespace Management: Supported 00:08:40.660 Device Self-Test: Not Supported 00:08:40.660 Directives: Supported 00:08:40.660 NVMe-MI: Not Supported 00:08:40.660 Virtualization Management: Not Supported 00:08:40.660 Doorbell Buffer Config: Supported 00:08:40.660 Get LBA Status Capability: Not Supported 00:08:40.660 Command & Feature Lockdown Capability: Not Supported 00:08:40.661 Abort Command Limit: 4 00:08:40.661 Async Event Request Limit: 4 00:08:40.661 Number of Firmware Slots: N/A 00:08:40.661 Firmware Slot 1 Read-Only: N/A 00:08:40.661 Firmware Activation Without Reset: N/A 
00:08:40.661 Multiple Update Detection Support: N/A 00:08:40.661 Firmware Update Granularity: No Information Provided 00:08:40.661 Per-Namespace SMART Log: Yes 00:08:40.661 Asymmetric Namespace Access Log Page: Not Supported 00:08:40.661 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:40.661 Command Effects Log Page: Supported 00:08:40.661 Get Log Page Extended Data: Supported 00:08:40.661 Telemetry Log Pages: Not Supported 00:08:40.661 Persistent Event Log Pages: Not Supported 00:08:40.661 Supported Log Pages Log Page: May Support 00:08:40.661 Commands Supported & Effects Log Page: Not Supported 00:08:40.661 Feature Identifiers & Effects Log Page:May Support 00:08:40.661 NVMe-MI Commands & Effects Log Page: May Support 00:08:40.661 Data Area 4 for Telemetry Log: Not Supported 00:08:40.661 Error Log Page Entries Supported: 1 00:08:40.661 Keep Alive: Not Supported 00:08:40.661 00:08:40.661 NVM Command Set Attributes 00:08:40.661 ========================== 00:08:40.661 Submission Queue Entry Size 00:08:40.661 Max: 64 00:08:40.661 Min: 64 00:08:40.661 Completion Queue Entry Size 00:08:40.661 Max: 16 00:08:40.661 Min: 16 00:08:40.661 Number of Namespaces: 256 00:08:40.661 Compare Command: Supported 00:08:40.661 Write Uncorrectable Command: Not Supported 00:08:40.661 Dataset Management Command: Supported 00:08:40.661 Write Zeroes Command: Supported 00:08:40.661 Set Features Save Field: Supported 00:08:40.661 Reservations: Not Supported 00:08:40.661 Timestamp: Supported 00:08:40.661 Copy: Supported 00:08:40.661 Volatile Write Cache: Present 00:08:40.661 Atomic Write Unit (Normal): 1 00:08:40.661 Atomic Write Unit (PFail): 1 00:08:40.661 Atomic Compare & Write Unit: 1 00:08:40.661 Fused Compare & Write: Not Supported 00:08:40.661 Scatter-Gather List 00:08:40.661 SGL Command Set: Supported 00:08:40.661 SGL Keyed: Not Supported 00:08:40.661 SGL Bit Bucket Descriptor: Not Supported 00:08:40.661 SGL Metadata Pointer: Not Supported 00:08:40.661 Oversized SGL: Not Supported 00:08:40.661 SGL Metadata Address: Not Supported 00:08:40.661 SGL Offset: Not Supported 00:08:40.661 Transport SGL Data Block: Not Supported 00:08:40.661 Replay Protected Memory Block: Not Supported 00:08:40.661 00:08:40.661 Firmware Slot Information 00:08:40.661 ========================= 00:08:40.661 Active slot: 1 00:08:40.661 Slot 1 Firmware Revision: 1.0 00:08:40.661 00:08:40.661 00:08:40.661 Commands Supported and Effects 00:08:40.661 ============================== 00:08:40.661 Admin Commands 00:08:40.661 -------------- 00:08:40.661 Delete I/O Submission Queue (00h): Supported 00:08:40.661 Create I/O Submission Queue (01h): Supported 00:08:40.661 Get Log Page (02h): Supported 00:08:40.661 Delete I/O Completion Queue (04h): Supported 00:08:40.661 Create I/O Completion Queue (05h): Supported 00:08:40.661 Identify (06h): Supported 00:08:40.661 Abort (08h): Supported 00:08:40.661 Set Features (09h): Supported 00:08:40.661 Get Features (0Ah): Supported 00:08:40.661 Asynchronous Event Request (0Ch): Supported 00:08:40.661 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:40.661 Directive Send (19h): Supported 00:08:40.661 Directive Receive (1Ah): Supported 00:08:40.661 Virtualization Management (1Ch): Supported 00:08:40.661 Doorbell Buffer Config (7Ch): Supported 00:08:40.661 Format NVM (80h): Supported LBA-Change 00:08:40.661 I/O Commands 00:08:40.661 ------------ 00:08:40.661 Flush (00h): Supported LBA-Change 00:08:40.661 Write (01h): Supported LBA-Change 00:08:40.661 Read (02h): Supported 00:08:40.661 Compare (05h): 
Supported 00:08:40.661 Write Zeroes (08h): Supported LBA-Change 00:08:40.661 Dataset Management (09h): Supported LBA-Change 00:08:40.661 Unknown (0Ch): Supported 00:08:40.661 Unknown (12h): Supported 00:08:40.661 Copy (19h): Supported LBA-Change 00:08:40.661 Unknown (1Dh): Supported LBA-Change 00:08:40.661 00:08:40.661 Error Log 00:08:40.661 ========= 00:08:40.661 00:08:40.661 Arbitration 00:08:40.661 =========== 00:08:40.661 Arbitration Burst: no limit 00:08:40.661 00:08:40.661 Power Management 00:08:40.661 ================ 00:08:40.661 Number of Power States: 1 00:08:40.661 Current Power State: Power State #0 00:08:40.661 Power State #0: 00:08:40.661 Max Power: 25.00 W 00:08:40.661 Non-Operational State: Operational 00:08:40.661 Entry Latency: 16 microseconds 00:08:40.661 Exit Latency: 4 microseconds 00:08:40.661 Relative Read Throughput: 0 00:08:40.661 Relative Read Latency: 0 00:08:40.661 Relative Write Throughput: 0 00:08:40.661 Relative Write Latency: 0 00:08:40.661 Idle Power: Not Reported 00:08:40.661 Active Power: Not Reported 00:08:40.661 Non-Operational Permissive Mode: Not Supported 00:08:40.661 00:08:40.661 Health Information 00:08:40.661 ================== 00:08:40.661 Critical Warnings: 00:08:40.661 Available Spare Space: OK 00:08:40.661 Temperature: OK 00:08:40.661 Device Reliability: OK 00:08:40.661 Read Only: No 00:08:40.661 Volatile Memory Backup: OK 00:08:40.661 Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.661 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:40.661 Available Spare: 0% 00:08:40.661 Available Spare Threshold: 0% 00:08:40.661 Life Percentage Used: 0% 00:08:40.661 Data Units Read: 2270 00:08:40.661 Data Units Written: 2057 00:08:40.661 Host Read Commands: 114631 00:08:40.661 Host Write Commands: 112900 00:08:40.661 Controller Busy Time: 0 minutes 00:08:40.661 Power Cycles: 0 00:08:40.661 Power On Hours: 0 hours 00:08:40.661 Unsafe Shutdowns: 0 00:08:40.661 Unrecoverable Media Errors: 0 00:08:40.661 Lifetime Error Log Entries: 0 00:08:40.661 Warning Temperature Time: 0 minutes 00:08:40.661 Critical Temperature Time: 0 minutes 00:08:40.661 00:08:40.661 Number of Queues 00:08:40.661 ================ 00:08:40.661 Number of I/O Submission Queues: 64 00:08:40.661 Number of I/O Completion Queues: 64 00:08:40.661 00:08:40.661 ZNS Specific Controller Data 00:08:40.661 ============================ 00:08:40.661 Zone Append Size Limit: 0 00:08:40.661 00:08:40.661 00:08:40.661 Active Namespaces 00:08:40.661 ================= 00:08:40.661 Namespace ID:1 00:08:40.661 Error Recovery Timeout: Unlimited 00:08:40.661 Command Set Identifier: NVM (00h) 00:08:40.661 Deallocate: Supported 00:08:40.661 Deallocated/Unwritten Error: Supported 00:08:40.661 Deallocated Read Value: All 0x00 00:08:40.661 Deallocate in Write Zeroes: Not Supported 00:08:40.661 Deallocated Guard Field: 0xFFFF 00:08:40.661 Flush: Supported 00:08:40.661 Reservation: Not Supported 00:08:40.661 Namespace Sharing Capabilities: Private 00:08:40.661 Size (in LBAs): 1048576 (4GiB) 00:08:40.661 Capacity (in LBAs): 1048576 (4GiB) 00:08:40.661 Utilization (in LBAs): 1048576 (4GiB) 00:08:40.661 Thin Provisioning: Not Supported 00:08:40.661 Per-NS Atomic Units: No 00:08:40.661 Maximum Single Source Range Length: 128 00:08:40.661 Maximum Copy Length: 128 00:08:40.661 Maximum Source Range Count: 128 00:08:40.661 NGUID/EUI64 Never Reused: No 00:08:40.661 Namespace Write Protected: No 00:08:40.661 Number of LBA Formats: 8 00:08:40.661 Current LBA Format: LBA Format #04 00:08:40.661 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:08:40.661 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:40.661 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:40.661 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:40.661 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:40.661 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:40.661 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:40.661 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:40.661 00:08:40.661 NVM Specific Namespace Data 00:08:40.661 =========================== 00:08:40.661 Logical Block Storage Tag Mask: 0 00:08:40.661 Protection Information Capabilities: 00:08:40.661 16b Guard Protection Information Storage Tag Support: No 00:08:40.661 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:40.661 Storage Tag Check Read Support: No 00:08:40.661 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.661 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.661 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.661 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.661 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.661 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.661 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.661 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.661 Namespace ID:2 00:08:40.661 Error Recovery Timeout: Unlimited 00:08:40.662 Command Set Identifier: NVM (00h) 00:08:40.662 Deallocate: Supported 00:08:40.662 Deallocated/Unwritten Error: Supported 00:08:40.662 Deallocated Read Value: All 0x00 00:08:40.662 Deallocate in Write Zeroes: Not Supported 00:08:40.662 Deallocated Guard Field: 0xFFFF 00:08:40.662 Flush: Supported 00:08:40.662 Reservation: Not Supported 00:08:40.662 Namespace Sharing Capabilities: Private 00:08:40.662 Size (in LBAs): 1048576 (4GiB) 00:08:40.662 Capacity (in LBAs): 1048576 (4GiB) 00:08:40.662 Utilization (in LBAs): 1048576 (4GiB) 00:08:40.662 Thin Provisioning: Not Supported 00:08:40.662 Per-NS Atomic Units: No 00:08:40.662 Maximum Single Source Range Length: 128 00:08:40.662 Maximum Copy Length: 128 00:08:40.662 Maximum Source Range Count: 128 00:08:40.662 NGUID/EUI64 Never Reused: No 00:08:40.662 Namespace Write Protected: No 00:08:40.662 Number of LBA Formats: 8 00:08:40.662 Current LBA Format: LBA Format #04 00:08:40.662 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:40.662 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:40.662 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:40.662 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:40.662 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:40.662 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:40.662 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:40.662 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:40.662 00:08:40.662 NVM Specific Namespace Data 00:08:40.662 =========================== 00:08:40.662 Logical Block Storage Tag Mask: 0 00:08:40.662 Protection Information Capabilities: 00:08:40.662 16b Guard Protection Information Storage Tag Support: No 00:08:40.662 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
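The Size/Capacity/Utilization figures above follow directly from the LBA count and the 4096-byte data size of the current LBA format (#04). A quick sanity check of the two counts that recur in this run, assuming 1 GiB = 1024^3 bytes:

  echo "$(( 1048576 * 4096 / 1024 / 1024 / 1024 )) GiB"   # 12342's namespaces -> 4 GiB
  echo "$(( 1310720 * 4096 / 1024 / 1024 / 1024 )) GiB"   # 12341's namespace  -> 5 GiB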
00:08:40.662 Storage Tag Check Read Support: No 00:08:40.662 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 Namespace ID:3 00:08:40.662 Error Recovery Timeout: Unlimited 00:08:40.662 Command Set Identifier: NVM (00h) 00:08:40.662 Deallocate: Supported 00:08:40.662 Deallocated/Unwritten Error: Supported 00:08:40.662 Deallocated Read Value: All 0x00 00:08:40.662 Deallocate in Write Zeroes: Not Supported 00:08:40.662 Deallocated Guard Field: 0xFFFF 00:08:40.662 Flush: Supported 00:08:40.662 Reservation: Not Supported 00:08:40.662 Namespace Sharing Capabilities: Private 00:08:40.662 Size (in LBAs): 1048576 (4GiB) 00:08:40.662 Capacity (in LBAs): 1048576 (4GiB) 00:08:40.662 Utilization (in LBAs): 1048576 (4GiB) 00:08:40.662 Thin Provisioning: Not Supported 00:08:40.662 Per-NS Atomic Units: No 00:08:40.662 Maximum Single Source Range Length: 128 00:08:40.662 Maximum Copy Length: 128 00:08:40.662 Maximum Source Range Count: 128 00:08:40.662 NGUID/EUI64 Never Reused: No 00:08:40.662 Namespace Write Protected: No 00:08:40.662 Number of LBA Formats: 8 00:08:40.662 Current LBA Format: LBA Format #04 00:08:40.662 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:40.662 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:40.662 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:40.662 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:40.662 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:40.662 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:40.662 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:40.662 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:40.662 00:08:40.662 NVM Specific Namespace Data 00:08:40.662 =========================== 00:08:40.662 Logical Block Storage Tag Mask: 0 00:08:40.662 Protection Information Capabilities: 00:08:40.662 16b Guard Protection Information Storage Tag Support: No 00:08:40.662 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:40.662 Storage Tag Check Read Support: No 00:08:40.662 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.662 04:25:37 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:40.662 04:25:37 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:40.925 ===================================================== 00:08:40.925 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:40.925 ===================================================== 00:08:40.925 Controller Capabilities/Features 00:08:40.925 ================================ 00:08:40.925 Vendor ID: 1b36 00:08:40.925 Subsystem Vendor ID: 1af4 00:08:40.925 Serial Number: 12340 00:08:40.925 Model Number: QEMU NVMe Ctrl 00:08:40.925 Firmware Version: 8.0.0 00:08:40.925 Recommended Arb Burst: 6 00:08:40.925 IEEE OUI Identifier: 00 54 52 00:08:40.925 Multi-path I/O 00:08:40.925 May have multiple subsystem ports: No 00:08:40.925 May have multiple controllers: No 00:08:40.925 Associated with SR-IOV VF: No 00:08:40.925 Max Data Transfer Size: 524288 00:08:40.925 Max Number of Namespaces: 256 00:08:40.925 Max Number of I/O Queues: 64 00:08:40.925 NVMe Specification Version (VS): 1.4 00:08:40.925 NVMe Specification Version (Identify): 1.4 00:08:40.925 Maximum Queue Entries: 2048 00:08:40.925 Contiguous Queues Required: Yes 00:08:40.925 Arbitration Mechanisms Supported 00:08:40.925 Weighted Round Robin: Not Supported 00:08:40.925 Vendor Specific: Not Supported 00:08:40.925 Reset Timeout: 7500 ms 00:08:40.925 Doorbell Stride: 4 bytes 00:08:40.925 NVM Subsystem Reset: Not Supported 00:08:40.925 Command Sets Supported 00:08:40.925 NVM Command Set: Supported 00:08:40.925 Boot Partition: Not Supported 00:08:40.925 Memory Page Size Minimum: 4096 bytes 00:08:40.925 Memory Page Size Maximum: 65536 bytes 00:08:40.925 Persistent Memory Region: Not Supported 00:08:40.925 Optional Asynchronous Events Supported 00:08:40.925 Namespace Attribute Notices: Supported 00:08:40.925 Firmware Activation Notices: Not Supported 00:08:40.925 ANA Change Notices: Not Supported 00:08:40.925 PLE Aggregate Log Change Notices: Not Supported 00:08:40.925 LBA Status Info Alert Notices: Not Supported 00:08:40.925 EGE Aggregate Log Change Notices: Not Supported 00:08:40.925 Normal NVM Subsystem Shutdown event: Not Supported 00:08:40.925 Zone Descriptor Change Notices: Not Supported 00:08:40.925 Discovery Log Change Notices: Not Supported 00:08:40.925 Controller Attributes 00:08:40.925 128-bit Host Identifier: Not Supported 00:08:40.925 Non-Operational Permissive Mode: Not Supported 00:08:40.925 NVM Sets: Not Supported 00:08:40.925 Read Recovery Levels: Not Supported 00:08:40.925 Endurance Groups: Not Supported 00:08:40.925 Predictable Latency Mode: Not Supported 00:08:40.925 Traffic Based Keep ALive: Not Supported 00:08:40.925 Namespace Granularity: Not Supported 00:08:40.925 SQ Associations: Not Supported 00:08:40.925 UUID List: Not Supported 00:08:40.925 Multi-Domain Subsystem: Not Supported 00:08:40.925 Fixed Capacity Management: Not Supported 00:08:40.925 Variable Capacity Management: Not Supported 00:08:40.925 Delete Endurance Group: Not Supported 00:08:40.925 Delete NVM Set: Not Supported 00:08:40.925 Extended LBA Formats Supported: Supported 00:08:40.925 Flexible Data Placement Supported: Not Supported 00:08:40.925 00:08:40.925 Controller Memory Buffer Support 00:08:40.925 ================================ 00:08:40.925 Supported: No 00:08:40.925 00:08:40.925 Persistent Memory Region Support 00:08:40.925 
================================ 00:08:40.925 Supported: No 00:08:40.925 00:08:40.925 Admin Command Set Attributes 00:08:40.925 ============================ 00:08:40.925 Security Send/Receive: Not Supported 00:08:40.925 Format NVM: Supported 00:08:40.925 Firmware Activate/Download: Not Supported 00:08:40.925 Namespace Management: Supported 00:08:40.925 Device Self-Test: Not Supported 00:08:40.925 Directives: Supported 00:08:40.925 NVMe-MI: Not Supported 00:08:40.925 Virtualization Management: Not Supported 00:08:40.925 Doorbell Buffer Config: Supported 00:08:40.925 Get LBA Status Capability: Not Supported 00:08:40.925 Command & Feature Lockdown Capability: Not Supported 00:08:40.925 Abort Command Limit: 4 00:08:40.925 Async Event Request Limit: 4 00:08:40.925 Number of Firmware Slots: N/A 00:08:40.925 Firmware Slot 1 Read-Only: N/A 00:08:40.925 Firmware Activation Without Reset: N/A 00:08:40.925 Multiple Update Detection Support: N/A 00:08:40.925 Firmware Update Granularity: No Information Provided 00:08:40.925 Per-Namespace SMART Log: Yes 00:08:40.925 Asymmetric Namespace Access Log Page: Not Supported 00:08:40.925 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:40.925 Command Effects Log Page: Supported 00:08:40.925 Get Log Page Extended Data: Supported 00:08:40.925 Telemetry Log Pages: Not Supported 00:08:40.925 Persistent Event Log Pages: Not Supported 00:08:40.925 Supported Log Pages Log Page: May Support 00:08:40.925 Commands Supported & Effects Log Page: Not Supported 00:08:40.925 Feature Identifiers & Effects Log Page:May Support 00:08:40.925 NVMe-MI Commands & Effects Log Page: May Support 00:08:40.925 Data Area 4 for Telemetry Log: Not Supported 00:08:40.925 Error Log Page Entries Supported: 1 00:08:40.925 Keep Alive: Not Supported 00:08:40.925 00:08:40.925 NVM Command Set Attributes 00:08:40.925 ========================== 00:08:40.925 Submission Queue Entry Size 00:08:40.925 Max: 64 00:08:40.925 Min: 64 00:08:40.925 Completion Queue Entry Size 00:08:40.925 Max: 16 00:08:40.925 Min: 16 00:08:40.925 Number of Namespaces: 256 00:08:40.925 Compare Command: Supported 00:08:40.925 Write Uncorrectable Command: Not Supported 00:08:40.925 Dataset Management Command: Supported 00:08:40.925 Write Zeroes Command: Supported 00:08:40.925 Set Features Save Field: Supported 00:08:40.925 Reservations: Not Supported 00:08:40.925 Timestamp: Supported 00:08:40.925 Copy: Supported 00:08:40.925 Volatile Write Cache: Present 00:08:40.925 Atomic Write Unit (Normal): 1 00:08:40.925 Atomic Write Unit (PFail): 1 00:08:40.925 Atomic Compare & Write Unit: 1 00:08:40.925 Fused Compare & Write: Not Supported 00:08:40.925 Scatter-Gather List 00:08:40.925 SGL Command Set: Supported 00:08:40.925 SGL Keyed: Not Supported 00:08:40.925 SGL Bit Bucket Descriptor: Not Supported 00:08:40.925 SGL Metadata Pointer: Not Supported 00:08:40.925 Oversized SGL: Not Supported 00:08:40.925 SGL Metadata Address: Not Supported 00:08:40.925 SGL Offset: Not Supported 00:08:40.925 Transport SGL Data Block: Not Supported 00:08:40.925 Replay Protected Memory Block: Not Supported 00:08:40.925 00:08:40.925 Firmware Slot Information 00:08:40.925 ========================= 00:08:40.925 Active slot: 1 00:08:40.925 Slot 1 Firmware Revision: 1.0 00:08:40.925 00:08:40.925 00:08:40.925 Commands Supported and Effects 00:08:40.925 ============================== 00:08:40.925 Admin Commands 00:08:40.925 -------------- 00:08:40.925 Delete I/O Submission Queue (00h): Supported 00:08:40.925 Create I/O Submission Queue (01h): Supported 00:08:40.925 
Get Log Page (02h): Supported 00:08:40.926 Delete I/O Completion Queue (04h): Supported 00:08:40.926 Create I/O Completion Queue (05h): Supported 00:08:40.926 Identify (06h): Supported 00:08:40.926 Abort (08h): Supported 00:08:40.926 Set Features (09h): Supported 00:08:40.926 Get Features (0Ah): Supported 00:08:40.926 Asynchronous Event Request (0Ch): Supported 00:08:40.926 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:40.926 Directive Send (19h): Supported 00:08:40.926 Directive Receive (1Ah): Supported 00:08:40.926 Virtualization Management (1Ch): Supported 00:08:40.926 Doorbell Buffer Config (7Ch): Supported 00:08:40.926 Format NVM (80h): Supported LBA-Change 00:08:40.926 I/O Commands 00:08:40.926 ------------ 00:08:40.926 Flush (00h): Supported LBA-Change 00:08:40.926 Write (01h): Supported LBA-Change 00:08:40.926 Read (02h): Supported 00:08:40.926 Compare (05h): Supported 00:08:40.926 Write Zeroes (08h): Supported LBA-Change 00:08:40.926 Dataset Management (09h): Supported LBA-Change 00:08:40.926 Unknown (0Ch): Supported 00:08:40.926 Unknown (12h): Supported 00:08:40.926 Copy (19h): Supported LBA-Change 00:08:40.926 Unknown (1Dh): Supported LBA-Change 00:08:40.926 00:08:40.926 Error Log 00:08:40.926 ========= 00:08:40.926 00:08:40.926 Arbitration 00:08:40.926 =========== 00:08:40.926 Arbitration Burst: no limit 00:08:40.926 00:08:40.926 Power Management 00:08:40.926 ================ 00:08:40.926 Number of Power States: 1 00:08:40.926 Current Power State: Power State #0 00:08:40.926 Power State #0: 00:08:40.926 Max Power: 25.00 W 00:08:40.926 Non-Operational State: Operational 00:08:40.926 Entry Latency: 16 microseconds 00:08:40.926 Exit Latency: 4 microseconds 00:08:40.926 Relative Read Throughput: 0 00:08:40.926 Relative Read Latency: 0 00:08:40.926 Relative Write Throughput: 0 00:08:40.926 Relative Write Latency: 0 00:08:40.926 Idle Power: Not Reported 00:08:40.926 Active Power: Not Reported 00:08:40.926 Non-Operational Permissive Mode: Not Supported 00:08:40.926 00:08:40.926 Health Information 00:08:40.926 ================== 00:08:40.926 Critical Warnings: 00:08:40.926 Available Spare Space: OK 00:08:40.926 Temperature: OK 00:08:40.926 Device Reliability: OK 00:08:40.926 Read Only: No 00:08:40.926 Volatile Memory Backup: OK 00:08:40.926 Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.926 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:40.926 Available Spare: 0% 00:08:40.926 Available Spare Threshold: 0% 00:08:40.926 Life Percentage Used: 0% 00:08:40.926 Data Units Read: 708 00:08:40.926 Data Units Written: 636 00:08:40.926 Host Read Commands: 37529 00:08:40.926 Host Write Commands: 37315 00:08:40.926 Controller Busy Time: 0 minutes 00:08:40.926 Power Cycles: 0 00:08:40.926 Power On Hours: 0 hours 00:08:40.926 Unsafe Shutdowns: 0 00:08:40.926 Unrecoverable Media Errors: 0 00:08:40.926 Lifetime Error Log Entries: 0 00:08:40.926 Warning Temperature Time: 0 minutes 00:08:40.926 Critical Temperature Time: 0 minutes 00:08:40.926 00:08:40.926 Number of Queues 00:08:40.926 ================ 00:08:40.926 Number of I/O Submission Queues: 64 00:08:40.926 Number of I/O Completion Queues: 64 00:08:40.926 00:08:40.926 ZNS Specific Controller Data 00:08:40.926 ============================ 00:08:40.926 Zone Append Size Limit: 0 00:08:40.926 00:08:40.926 00:08:40.926 Active Namespaces 00:08:40.926 ================= 00:08:40.926 Namespace ID:1 00:08:40.926 Error Recovery Timeout: Unlimited 00:08:40.926 Command Set Identifier: NVM (00h) 00:08:40.926 Deallocate: Supported 
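Two unit conventions in the health block above are easy to misread: the Celsius figure in parentheses is just the Kelvin value minus 273, and NVMe SMART data units count thousands of 512-byte units, which is why the numbers stay small even after tens of thousands of host commands. A rough decoding of the 12340 controller's figures, assuming those spec conventions:

  echo "$(( 323 - 273 )) C"                         # current temperature -> 50 C
  echo "$(( 708 * 1000 * 512 / 1000000 )) MB read"  # 708 data units      -> 362 MB read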
00:08:40.926 Deallocated/Unwritten Error: Supported 00:08:40.926 Deallocated Read Value: All 0x00 00:08:40.926 Deallocate in Write Zeroes: Not Supported 00:08:40.926 Deallocated Guard Field: 0xFFFF 00:08:40.926 Flush: Supported 00:08:40.926 Reservation: Not Supported 00:08:40.926 Metadata Transferred as: Separate Metadata Buffer 00:08:40.926 Namespace Sharing Capabilities: Private 00:08:40.926 Size (in LBAs): 1548666 (5GiB) 00:08:40.926 Capacity (in LBAs): 1548666 (5GiB) 00:08:40.926 Utilization (in LBAs): 1548666 (5GiB) 00:08:40.926 Thin Provisioning: Not Supported 00:08:40.926 Per-NS Atomic Units: No 00:08:40.926 Maximum Single Source Range Length: 128 00:08:40.926 Maximum Copy Length: 128 00:08:40.926 Maximum Source Range Count: 128 00:08:40.926 NGUID/EUI64 Never Reused: No 00:08:40.926 Namespace Write Protected: No 00:08:40.926 Number of LBA Formats: 8 00:08:40.926 Current LBA Format: LBA Format #07 00:08:40.926 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:40.926 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:40.926 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:40.926 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:40.926 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:40.926 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:40.926 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:40.926 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:40.926 00:08:40.926 NVM Specific Namespace Data 00:08:40.926 =========================== 00:08:40.926 Logical Block Storage Tag Mask: 0 00:08:40.926 Protection Information Capabilities: 00:08:40.926 16b Guard Protection Information Storage Tag Support: No 00:08:40.926 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:40.926 Storage Tag Check Read Support: No 00:08:40.926 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.926 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.926 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.926 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.926 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.926 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.926 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.926 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.926 04:25:37 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:40.926 04:25:37 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:41.189 ===================================================== 00:08:41.189 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:41.189 ===================================================== 00:08:41.189 Controller Capabilities/Features 00:08:41.189 ================================ 00:08:41.189 Vendor ID: 1b36 00:08:41.189 Subsystem Vendor ID: 1af4 00:08:41.189 Serial Number: 12341 00:08:41.189 Model Number: QEMU NVMe Ctrl 00:08:41.189 Firmware Version: 8.0.0 00:08:41.189 Recommended Arb Burst: 6 00:08:41.189 IEEE OUI Identifier: 00 54 52 00:08:41.189 Multi-path I/O 00:08:41.189 May have multiple subsystem ports: No 00:08:41.189 May have multiple 
controllers: No 00:08:41.189 Associated with SR-IOV VF: No 00:08:41.189 Max Data Transfer Size: 524288 00:08:41.189 Max Number of Namespaces: 256 00:08:41.189 Max Number of I/O Queues: 64 00:08:41.189 NVMe Specification Version (VS): 1.4 00:08:41.189 NVMe Specification Version (Identify): 1.4 00:08:41.189 Maximum Queue Entries: 2048 00:08:41.189 Contiguous Queues Required: Yes 00:08:41.189 Arbitration Mechanisms Supported 00:08:41.189 Weighted Round Robin: Not Supported 00:08:41.189 Vendor Specific: Not Supported 00:08:41.189 Reset Timeout: 7500 ms 00:08:41.189 Doorbell Stride: 4 bytes 00:08:41.189 NVM Subsystem Reset: Not Supported 00:08:41.189 Command Sets Supported 00:08:41.189 NVM Command Set: Supported 00:08:41.189 Boot Partition: Not Supported 00:08:41.189 Memory Page Size Minimum: 4096 bytes 00:08:41.189 Memory Page Size Maximum: 65536 bytes 00:08:41.189 Persistent Memory Region: Not Supported 00:08:41.189 Optional Asynchronous Events Supported 00:08:41.189 Namespace Attribute Notices: Supported 00:08:41.189 Firmware Activation Notices: Not Supported 00:08:41.189 ANA Change Notices: Not Supported 00:08:41.189 PLE Aggregate Log Change Notices: Not Supported 00:08:41.189 LBA Status Info Alert Notices: Not Supported 00:08:41.189 EGE Aggregate Log Change Notices: Not Supported 00:08:41.189 Normal NVM Subsystem Shutdown event: Not Supported 00:08:41.189 Zone Descriptor Change Notices: Not Supported 00:08:41.189 Discovery Log Change Notices: Not Supported 00:08:41.189 Controller Attributes 00:08:41.189 128-bit Host Identifier: Not Supported 00:08:41.189 Non-Operational Permissive Mode: Not Supported 00:08:41.189 NVM Sets: Not Supported 00:08:41.189 Read Recovery Levels: Not Supported 00:08:41.189 Endurance Groups: Not Supported 00:08:41.189 Predictable Latency Mode: Not Supported 00:08:41.189 Traffic Based Keep ALive: Not Supported 00:08:41.189 Namespace Granularity: Not Supported 00:08:41.189 SQ Associations: Not Supported 00:08:41.189 UUID List: Not Supported 00:08:41.189 Multi-Domain Subsystem: Not Supported 00:08:41.189 Fixed Capacity Management: Not Supported 00:08:41.189 Variable Capacity Management: Not Supported 00:08:41.189 Delete Endurance Group: Not Supported 00:08:41.189 Delete NVM Set: Not Supported 00:08:41.189 Extended LBA Formats Supported: Supported 00:08:41.189 Flexible Data Placement Supported: Not Supported 00:08:41.189 00:08:41.189 Controller Memory Buffer Support 00:08:41.189 ================================ 00:08:41.189 Supported: No 00:08:41.189 00:08:41.189 Persistent Memory Region Support 00:08:41.189 ================================ 00:08:41.189 Supported: No 00:08:41.189 00:08:41.189 Admin Command Set Attributes 00:08:41.189 ============================ 00:08:41.189 Security Send/Receive: Not Supported 00:08:41.189 Format NVM: Supported 00:08:41.189 Firmware Activate/Download: Not Supported 00:08:41.189 Namespace Management: Supported 00:08:41.189 Device Self-Test: Not Supported 00:08:41.189 Directives: Supported 00:08:41.189 NVMe-MI: Not Supported 00:08:41.189 Virtualization Management: Not Supported 00:08:41.189 Doorbell Buffer Config: Supported 00:08:41.189 Get LBA Status Capability: Not Supported 00:08:41.189 Command & Feature Lockdown Capability: Not Supported 00:08:41.189 Abort Command Limit: 4 00:08:41.189 Async Event Request Limit: 4 00:08:41.189 Number of Firmware Slots: N/A 00:08:41.189 Firmware Slot 1 Read-Only: N/A 00:08:41.189 Firmware Activation Without Reset: N/A 00:08:41.189 Multiple Update Detection Support: N/A 00:08:41.189 Firmware Update 
Granularity: No Information Provided 00:08:41.189 Per-Namespace SMART Log: Yes 00:08:41.189 Asymmetric Namespace Access Log Page: Not Supported 00:08:41.189 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:41.189 Command Effects Log Page: Supported 00:08:41.189 Get Log Page Extended Data: Supported 00:08:41.189 Telemetry Log Pages: Not Supported 00:08:41.189 Persistent Event Log Pages: Not Supported 00:08:41.189 Supported Log Pages Log Page: May Support 00:08:41.189 Commands Supported & Effects Log Page: Not Supported 00:08:41.189 Feature Identifiers & Effects Log Page:May Support 00:08:41.189 NVMe-MI Commands & Effects Log Page: May Support 00:08:41.189 Data Area 4 for Telemetry Log: Not Supported 00:08:41.189 Error Log Page Entries Supported: 1 00:08:41.189 Keep Alive: Not Supported 00:08:41.189 00:08:41.189 NVM Command Set Attributes 00:08:41.189 ========================== 00:08:41.189 Submission Queue Entry Size 00:08:41.189 Max: 64 00:08:41.189 Min: 64 00:08:41.189 Completion Queue Entry Size 00:08:41.189 Max: 16 00:08:41.189 Min: 16 00:08:41.189 Number of Namespaces: 256 00:08:41.189 Compare Command: Supported 00:08:41.189 Write Uncorrectable Command: Not Supported 00:08:41.189 Dataset Management Command: Supported 00:08:41.189 Write Zeroes Command: Supported 00:08:41.189 Set Features Save Field: Supported 00:08:41.189 Reservations: Not Supported 00:08:41.189 Timestamp: Supported 00:08:41.189 Copy: Supported 00:08:41.189 Volatile Write Cache: Present 00:08:41.189 Atomic Write Unit (Normal): 1 00:08:41.189 Atomic Write Unit (PFail): 1 00:08:41.189 Atomic Compare & Write Unit: 1 00:08:41.189 Fused Compare & Write: Not Supported 00:08:41.189 Scatter-Gather List 00:08:41.189 SGL Command Set: Supported 00:08:41.189 SGL Keyed: Not Supported 00:08:41.189 SGL Bit Bucket Descriptor: Not Supported 00:08:41.189 SGL Metadata Pointer: Not Supported 00:08:41.189 Oversized SGL: Not Supported 00:08:41.189 SGL Metadata Address: Not Supported 00:08:41.189 SGL Offset: Not Supported 00:08:41.189 Transport SGL Data Block: Not Supported 00:08:41.189 Replay Protected Memory Block: Not Supported 00:08:41.189 00:08:41.189 Firmware Slot Information 00:08:41.189 ========================= 00:08:41.189 Active slot: 1 00:08:41.189 Slot 1 Firmware Revision: 1.0 00:08:41.189 00:08:41.190 00:08:41.190 Commands Supported and Effects 00:08:41.190 ============================== 00:08:41.190 Admin Commands 00:08:41.190 -------------- 00:08:41.190 Delete I/O Submission Queue (00h): Supported 00:08:41.190 Create I/O Submission Queue (01h): Supported 00:08:41.190 Get Log Page (02h): Supported 00:08:41.190 Delete I/O Completion Queue (04h): Supported 00:08:41.190 Create I/O Completion Queue (05h): Supported 00:08:41.190 Identify (06h): Supported 00:08:41.190 Abort (08h): Supported 00:08:41.190 Set Features (09h): Supported 00:08:41.190 Get Features (0Ah): Supported 00:08:41.190 Asynchronous Event Request (0Ch): Supported 00:08:41.190 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:41.190 Directive Send (19h): Supported 00:08:41.190 Directive Receive (1Ah): Supported 00:08:41.190 Virtualization Management (1Ch): Supported 00:08:41.190 Doorbell Buffer Config (7Ch): Supported 00:08:41.190 Format NVM (80h): Supported LBA-Change 00:08:41.190 I/O Commands 00:08:41.190 ------------ 00:08:41.190 Flush (00h): Supported LBA-Change 00:08:41.190 Write (01h): Supported LBA-Change 00:08:41.190 Read (02h): Supported 00:08:41.190 Compare (05h): Supported 00:08:41.190 Write Zeroes (08h): Supported LBA-Change 00:08:41.190 
Dataset Management (09h): Supported LBA-Change 00:08:41.190 Unknown (0Ch): Supported 00:08:41.190 Unknown (12h): Supported 00:08:41.190 Copy (19h): Supported LBA-Change 00:08:41.190 Unknown (1Dh): Supported LBA-Change 00:08:41.190 00:08:41.190 Error Log 00:08:41.190 ========= 00:08:41.190 00:08:41.190 Arbitration 00:08:41.190 =========== 00:08:41.190 Arbitration Burst: no limit 00:08:41.190 00:08:41.190 Power Management 00:08:41.190 ================ 00:08:41.190 Number of Power States: 1 00:08:41.190 Current Power State: Power State #0 00:08:41.190 Power State #0: 00:08:41.190 Max Power: 25.00 W 00:08:41.190 Non-Operational State: Operational 00:08:41.190 Entry Latency: 16 microseconds 00:08:41.190 Exit Latency: 4 microseconds 00:08:41.190 Relative Read Throughput: 0 00:08:41.190 Relative Read Latency: 0 00:08:41.190 Relative Write Throughput: 0 00:08:41.190 Relative Write Latency: 0 00:08:41.190 Idle Power: Not Reported 00:08:41.190 Active Power: Not Reported 00:08:41.190 Non-Operational Permissive Mode: Not Supported 00:08:41.190 00:08:41.190 Health Information 00:08:41.190 ================== 00:08:41.190 Critical Warnings: 00:08:41.190 Available Spare Space: OK 00:08:41.190 Temperature: OK 00:08:41.190 Device Reliability: OK 00:08:41.190 Read Only: No 00:08:41.190 Volatile Memory Backup: OK 00:08:41.190 Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.190 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:41.190 Available Spare: 0% 00:08:41.190 Available Spare Threshold: 0% 00:08:41.190 Life Percentage Used: 0% 00:08:41.190 Data Units Read: 1055 00:08:41.190 Data Units Written: 920 00:08:41.190 Host Read Commands: 54905 00:08:41.190 Host Write Commands: 53637 00:08:41.190 Controller Busy Time: 0 minutes 00:08:41.190 Power Cycles: 0 00:08:41.190 Power On Hours: 0 hours 00:08:41.190 Unsafe Shutdowns: 0 00:08:41.190 Unrecoverable Media Errors: 0 00:08:41.190 Lifetime Error Log Entries: 0 00:08:41.190 Warning Temperature Time: 0 minutes 00:08:41.190 Critical Temperature Time: 0 minutes 00:08:41.190 00:08:41.190 Number of Queues 00:08:41.190 ================ 00:08:41.190 Number of I/O Submission Queues: 64 00:08:41.190 Number of I/O Completion Queues: 64 00:08:41.190 00:08:41.190 ZNS Specific Controller Data 00:08:41.190 ============================ 00:08:41.190 Zone Append Size Limit: 0 00:08:41.190 00:08:41.190 00:08:41.190 Active Namespaces 00:08:41.190 ================= 00:08:41.190 Namespace ID:1 00:08:41.190 Error Recovery Timeout: Unlimited 00:08:41.190 Command Set Identifier: NVM (00h) 00:08:41.190 Deallocate: Supported 00:08:41.190 Deallocated/Unwritten Error: Supported 00:08:41.190 Deallocated Read Value: All 0x00 00:08:41.190 Deallocate in Write Zeroes: Not Supported 00:08:41.190 Deallocated Guard Field: 0xFFFF 00:08:41.190 Flush: Supported 00:08:41.190 Reservation: Not Supported 00:08:41.190 Namespace Sharing Capabilities: Private 00:08:41.190 Size (in LBAs): 1310720 (5GiB) 00:08:41.190 Capacity (in LBAs): 1310720 (5GiB) 00:08:41.190 Utilization (in LBAs): 1310720 (5GiB) 00:08:41.190 Thin Provisioning: Not Supported 00:08:41.190 Per-NS Atomic Units: No 00:08:41.190 Maximum Single Source Range Length: 128 00:08:41.190 Maximum Copy Length: 128 00:08:41.190 Maximum Source Range Count: 128 00:08:41.190 NGUID/EUI64 Never Reused: No 00:08:41.190 Namespace Write Protected: No 00:08:41.190 Number of LBA Formats: 8 00:08:41.190 Current LBA Format: LBA Format #04 00:08:41.190 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.190 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:08:41.190 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.190 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.190 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.190 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.190 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.190 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.190 00:08:41.190 NVM Specific Namespace Data 00:08:41.190 =========================== 00:08:41.190 Logical Block Storage Tag Mask: 0 00:08:41.190 Protection Information Capabilities: 00:08:41.190 16b Guard Protection Information Storage Tag Support: No 00:08:41.190 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:41.190 Storage Tag Check Read Support: No 00:08:41.190 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.190 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.190 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.190 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.190 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.190 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.190 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.190 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.190 04:25:37 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:41.190 04:25:37 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:41.452 ===================================================== 00:08:41.452 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:41.452 ===================================================== 00:08:41.452 Controller Capabilities/Features 00:08:41.452 ================================ 00:08:41.452 Vendor ID: 1b36 00:08:41.452 Subsystem Vendor ID: 1af4 00:08:41.452 Serial Number: 12342 00:08:41.452 Model Number: QEMU NVMe Ctrl 00:08:41.452 Firmware Version: 8.0.0 00:08:41.452 Recommended Arb Burst: 6 00:08:41.452 IEEE OUI Identifier: 00 54 52 00:08:41.452 Multi-path I/O 00:08:41.452 May have multiple subsystem ports: No 00:08:41.452 May have multiple controllers: No 00:08:41.452 Associated with SR-IOV VF: No 00:08:41.452 Max Data Transfer Size: 524288 00:08:41.452 Max Number of Namespaces: 256 00:08:41.452 Max Number of I/O Queues: 64 00:08:41.452 NVMe Specification Version (VS): 1.4 00:08:41.452 NVMe Specification Version (Identify): 1.4 00:08:41.452 Maximum Queue Entries: 2048 00:08:41.452 Contiguous Queues Required: Yes 00:08:41.452 Arbitration Mechanisms Supported 00:08:41.452 Weighted Round Robin: Not Supported 00:08:41.452 Vendor Specific: Not Supported 00:08:41.452 Reset Timeout: 7500 ms 00:08:41.452 Doorbell Stride: 4 bytes 00:08:41.452 NVM Subsystem Reset: Not Supported 00:08:41.452 Command Sets Supported 00:08:41.452 NVM Command Set: Supported 00:08:41.452 Boot Partition: Not Supported 00:08:41.452 Memory Page Size Minimum: 4096 bytes 00:08:41.452 Memory Page Size Maximum: 65536 bytes 00:08:41.452 Persistent Memory Region: Not Supported 00:08:41.452 Optional Asynchronous Events Supported 00:08:41.452 Namespace Attribute Notices: Supported 00:08:41.452 Firmware 
Activation Notices: Not Supported 00:08:41.452 ANA Change Notices: Not Supported 00:08:41.452 PLE Aggregate Log Change Notices: Not Supported 00:08:41.452 LBA Status Info Alert Notices: Not Supported 00:08:41.452 EGE Aggregate Log Change Notices: Not Supported 00:08:41.452 Normal NVM Subsystem Shutdown event: Not Supported 00:08:41.452 Zone Descriptor Change Notices: Not Supported 00:08:41.452 Discovery Log Change Notices: Not Supported 00:08:41.452 Controller Attributes 00:08:41.452 128-bit Host Identifier: Not Supported 00:08:41.452 Non-Operational Permissive Mode: Not Supported 00:08:41.452 NVM Sets: Not Supported 00:08:41.452 Read Recovery Levels: Not Supported 00:08:41.452 Endurance Groups: Not Supported 00:08:41.452 Predictable Latency Mode: Not Supported 00:08:41.452 Traffic Based Keep ALive: Not Supported 00:08:41.452 Namespace Granularity: Not Supported 00:08:41.452 SQ Associations: Not Supported 00:08:41.452 UUID List: Not Supported 00:08:41.452 Multi-Domain Subsystem: Not Supported 00:08:41.452 Fixed Capacity Management: Not Supported 00:08:41.452 Variable Capacity Management: Not Supported 00:08:41.452 Delete Endurance Group: Not Supported 00:08:41.452 Delete NVM Set: Not Supported 00:08:41.452 Extended LBA Formats Supported: Supported 00:08:41.452 Flexible Data Placement Supported: Not Supported 00:08:41.452 00:08:41.453 Controller Memory Buffer Support 00:08:41.453 ================================ 00:08:41.453 Supported: No 00:08:41.453 00:08:41.453 Persistent Memory Region Support 00:08:41.453 ================================ 00:08:41.453 Supported: No 00:08:41.453 00:08:41.453 Admin Command Set Attributes 00:08:41.453 ============================ 00:08:41.453 Security Send/Receive: Not Supported 00:08:41.453 Format NVM: Supported 00:08:41.453 Firmware Activate/Download: Not Supported 00:08:41.453 Namespace Management: Supported 00:08:41.453 Device Self-Test: Not Supported 00:08:41.453 Directives: Supported 00:08:41.453 NVMe-MI: Not Supported 00:08:41.453 Virtualization Management: Not Supported 00:08:41.453 Doorbell Buffer Config: Supported 00:08:41.453 Get LBA Status Capability: Not Supported 00:08:41.453 Command & Feature Lockdown Capability: Not Supported 00:08:41.453 Abort Command Limit: 4 00:08:41.453 Async Event Request Limit: 4 00:08:41.453 Number of Firmware Slots: N/A 00:08:41.453 Firmware Slot 1 Read-Only: N/A 00:08:41.453 Firmware Activation Without Reset: N/A 00:08:41.453 Multiple Update Detection Support: N/A 00:08:41.453 Firmware Update Granularity: No Information Provided 00:08:41.453 Per-Namespace SMART Log: Yes 00:08:41.453 Asymmetric Namespace Access Log Page: Not Supported 00:08:41.453 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:41.453 Command Effects Log Page: Supported 00:08:41.453 Get Log Page Extended Data: Supported 00:08:41.453 Telemetry Log Pages: Not Supported 00:08:41.453 Persistent Event Log Pages: Not Supported 00:08:41.453 Supported Log Pages Log Page: May Support 00:08:41.453 Commands Supported & Effects Log Page: Not Supported 00:08:41.453 Feature Identifiers & Effects Log Page:May Support 00:08:41.453 NVMe-MI Commands & Effects Log Page: May Support 00:08:41.453 Data Area 4 for Telemetry Log: Not Supported 00:08:41.453 Error Log Page Entries Supported: 1 00:08:41.453 Keep Alive: Not Supported 00:08:41.453 00:08:41.453 NVM Command Set Attributes 00:08:41.453 ========================== 00:08:41.453 Submission Queue Entry Size 00:08:41.453 Max: 64 00:08:41.453 Min: 64 00:08:41.453 Completion Queue Entry Size 00:08:41.453 Max: 16 
00:08:41.453 Min: 16 00:08:41.453 Number of Namespaces: 256 00:08:41.453 Compare Command: Supported 00:08:41.453 Write Uncorrectable Command: Not Supported 00:08:41.453 Dataset Management Command: Supported 00:08:41.453 Write Zeroes Command: Supported 00:08:41.453 Set Features Save Field: Supported 00:08:41.453 Reservations: Not Supported 00:08:41.453 Timestamp: Supported 00:08:41.453 Copy: Supported 00:08:41.453 Volatile Write Cache: Present 00:08:41.453 Atomic Write Unit (Normal): 1 00:08:41.453 Atomic Write Unit (PFail): 1 00:08:41.453 Atomic Compare & Write Unit: 1 00:08:41.453 Fused Compare & Write: Not Supported 00:08:41.453 Scatter-Gather List 00:08:41.453 SGL Command Set: Supported 00:08:41.453 SGL Keyed: Not Supported 00:08:41.453 SGL Bit Bucket Descriptor: Not Supported 00:08:41.453 SGL Metadata Pointer: Not Supported 00:08:41.453 Oversized SGL: Not Supported 00:08:41.453 SGL Metadata Address: Not Supported 00:08:41.453 SGL Offset: Not Supported 00:08:41.453 Transport SGL Data Block: Not Supported 00:08:41.453 Replay Protected Memory Block: Not Supported 00:08:41.453 00:08:41.453 Firmware Slot Information 00:08:41.453 ========================= 00:08:41.453 Active slot: 1 00:08:41.453 Slot 1 Firmware Revision: 1.0 00:08:41.453 00:08:41.453 00:08:41.453 Commands Supported and Effects 00:08:41.453 ============================== 00:08:41.453 Admin Commands 00:08:41.453 -------------- 00:08:41.453 Delete I/O Submission Queue (00h): Supported 00:08:41.453 Create I/O Submission Queue (01h): Supported 00:08:41.453 Get Log Page (02h): Supported 00:08:41.453 Delete I/O Completion Queue (04h): Supported 00:08:41.453 Create I/O Completion Queue (05h): Supported 00:08:41.453 Identify (06h): Supported 00:08:41.453 Abort (08h): Supported 00:08:41.453 Set Features (09h): Supported 00:08:41.453 Get Features (0Ah): Supported 00:08:41.453 Asynchronous Event Request (0Ch): Supported 00:08:41.453 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:41.453 Directive Send (19h): Supported 00:08:41.453 Directive Receive (1Ah): Supported 00:08:41.453 Virtualization Management (1Ch): Supported 00:08:41.453 Doorbell Buffer Config (7Ch): Supported 00:08:41.453 Format NVM (80h): Supported LBA-Change 00:08:41.453 I/O Commands 00:08:41.453 ------------ 00:08:41.453 Flush (00h): Supported LBA-Change 00:08:41.453 Write (01h): Supported LBA-Change 00:08:41.453 Read (02h): Supported 00:08:41.453 Compare (05h): Supported 00:08:41.453 Write Zeroes (08h): Supported LBA-Change 00:08:41.453 Dataset Management (09h): Supported LBA-Change 00:08:41.453 Unknown (0Ch): Supported 00:08:41.453 Unknown (12h): Supported 00:08:41.453 Copy (19h): Supported LBA-Change 00:08:41.453 Unknown (1Dh): Supported LBA-Change 00:08:41.453 00:08:41.453 Error Log 00:08:41.453 ========= 00:08:41.453 00:08:41.453 Arbitration 00:08:41.453 =========== 00:08:41.453 Arbitration Burst: no limit 00:08:41.453 00:08:41.453 Power Management 00:08:41.453 ================ 00:08:41.453 Number of Power States: 1 00:08:41.453 Current Power State: Power State #0 00:08:41.453 Power State #0: 00:08:41.453 Max Power: 25.00 W 00:08:41.453 Non-Operational State: Operational 00:08:41.453 Entry Latency: 16 microseconds 00:08:41.453 Exit Latency: 4 microseconds 00:08:41.453 Relative Read Throughput: 0 00:08:41.453 Relative Read Latency: 0 00:08:41.453 Relative Write Throughput: 0 00:08:41.453 Relative Write Latency: 0 00:08:41.453 Idle Power: Not Reported 00:08:41.453 Active Power: Not Reported 00:08:41.453 Non-Operational Permissive Mode: Not Supported 
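The nvme.sh@15 and nvme.sh@16 trace lines threaded through this output come from the test script looping over the PCIe functions and running the identify tool against each one. A hypothetical reconstruction of that loop: the loop header and the identify invocation are verbatim from the trace, while the array contents are inferred from the addresses probed in this run:

  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r "trtype:PCIe traddr:$bdf" -i 0
  done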
00:08:41.453 00:08:41.453 Health Information 00:08:41.453 ================== 00:08:41.453 Critical Warnings: 00:08:41.453 Available Spare Space: OK 00:08:41.453 Temperature: OK 00:08:41.453 Device Reliability: OK 00:08:41.453 Read Only: No 00:08:41.453 Volatile Memory Backup: OK 00:08:41.453 Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.453 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:41.453 Available Spare: 0% 00:08:41.453 Available Spare Threshold: 0% 00:08:41.453 Life Percentage Used: 0% 00:08:41.453 Data Units Read: 2270 00:08:41.453 Data Units Written: 2057 00:08:41.453 Host Read Commands: 114631 00:08:41.453 Host Write Commands: 112900 00:08:41.453 Controller Busy Time: 0 minutes 00:08:41.453 Power Cycles: 0 00:08:41.453 Power On Hours: 0 hours 00:08:41.453 Unsafe Shutdowns: 0 00:08:41.453 Unrecoverable Media Errors: 0 00:08:41.453 Lifetime Error Log Entries: 0 00:08:41.453 Warning Temperature Time: 0 minutes 00:08:41.453 Critical Temperature Time: 0 minutes 00:08:41.453 00:08:41.453 Number of Queues 00:08:41.453 ================ 00:08:41.453 Number of I/O Submission Queues: 64 00:08:41.453 Number of I/O Completion Queues: 64 00:08:41.453 00:08:41.453 ZNS Specific Controller Data 00:08:41.453 ============================ 00:08:41.453 Zone Append Size Limit: 0 00:08:41.453 00:08:41.453 00:08:41.453 Active Namespaces 00:08:41.453 ================= 00:08:41.453 Namespace ID:1 00:08:41.453 Error Recovery Timeout: Unlimited 00:08:41.453 Command Set Identifier: NVM (00h) 00:08:41.453 Deallocate: Supported 00:08:41.453 Deallocated/Unwritten Error: Supported 00:08:41.453 Deallocated Read Value: All 0x00 00:08:41.453 Deallocate in Write Zeroes: Not Supported 00:08:41.453 Deallocated Guard Field: 0xFFFF 00:08:41.453 Flush: Supported 00:08:41.453 Reservation: Not Supported 00:08:41.453 Namespace Sharing Capabilities: Private 00:08:41.453 Size (in LBAs): 1048576 (4GiB) 00:08:41.453 Capacity (in LBAs): 1048576 (4GiB) 00:08:41.453 Utilization (in LBAs): 1048576 (4GiB) 00:08:41.453 Thin Provisioning: Not Supported 00:08:41.453 Per-NS Atomic Units: No 00:08:41.453 Maximum Single Source Range Length: 128 00:08:41.453 Maximum Copy Length: 128 00:08:41.453 Maximum Source Range Count: 128 00:08:41.453 NGUID/EUI64 Never Reused: No 00:08:41.453 Namespace Write Protected: No 00:08:41.453 Number of LBA Formats: 8 00:08:41.453 Current LBA Format: LBA Format #04 00:08:41.453 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.453 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.453 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.453 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.453 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.453 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.453 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.454 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.454 00:08:41.454 NVM Specific Namespace Data 00:08:41.454 =========================== 00:08:41.454 Logical Block Storage Tag Mask: 0 00:08:41.454 Protection Information Capabilities: 00:08:41.454 16b Guard Protection Information Storage Tag Support: No 00:08:41.454 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:41.454 Storage Tag Check Read Support: No 00:08:41.454 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Namespace ID:2 00:08:41.454 Error Recovery Timeout: Unlimited 00:08:41.454 Command Set Identifier: NVM (00h) 00:08:41.454 Deallocate: Supported 00:08:41.454 Deallocated/Unwritten Error: Supported 00:08:41.454 Deallocated Read Value: All 0x00 00:08:41.454 Deallocate in Write Zeroes: Not Supported 00:08:41.454 Deallocated Guard Field: 0xFFFF 00:08:41.454 Flush: Supported 00:08:41.454 Reservation: Not Supported 00:08:41.454 Namespace Sharing Capabilities: Private 00:08:41.454 Size (in LBAs): 1048576 (4GiB) 00:08:41.454 Capacity (in LBAs): 1048576 (4GiB) 00:08:41.454 Utilization (in LBAs): 1048576 (4GiB) 00:08:41.454 Thin Provisioning: Not Supported 00:08:41.454 Per-NS Atomic Units: No 00:08:41.454 Maximum Single Source Range Length: 128 00:08:41.454 Maximum Copy Length: 128 00:08:41.454 Maximum Source Range Count: 128 00:08:41.454 NGUID/EUI64 Never Reused: No 00:08:41.454 Namespace Write Protected: No 00:08:41.454 Number of LBA Formats: 8 00:08:41.454 Current LBA Format: LBA Format #04 00:08:41.454 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.454 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.454 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.454 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.454 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.454 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.454 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.454 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.454 00:08:41.454 NVM Specific Namespace Data 00:08:41.454 =========================== 00:08:41.454 Logical Block Storage Tag Mask: 0 00:08:41.454 Protection Information Capabilities: 00:08:41.454 16b Guard Protection Information Storage Tag Support: No 00:08:41.454 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:41.454 Storage Tag Check Read Support: No 00:08:41.454 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Namespace ID:3 00:08:41.454 Error Recovery Timeout: Unlimited 00:08:41.454 Command Set Identifier: NVM (00h) 00:08:41.454 Deallocate: Supported 00:08:41.454 Deallocated/Unwritten Error: Supported 00:08:41.454 Deallocated Read 
Value: All 0x00 00:08:41.454 Deallocate in Write Zeroes: Not Supported 00:08:41.454 Deallocated Guard Field: 0xFFFF 00:08:41.454 Flush: Supported 00:08:41.454 Reservation: Not Supported 00:08:41.454 Namespace Sharing Capabilities: Private 00:08:41.454 Size (in LBAs): 1048576 (4GiB) 00:08:41.454 Capacity (in LBAs): 1048576 (4GiB) 00:08:41.454 Utilization (in LBAs): 1048576 (4GiB) 00:08:41.454 Thin Provisioning: Not Supported 00:08:41.454 Per-NS Atomic Units: No 00:08:41.454 Maximum Single Source Range Length: 128 00:08:41.454 Maximum Copy Length: 128 00:08:41.454 Maximum Source Range Count: 128 00:08:41.454 NGUID/EUI64 Never Reused: No 00:08:41.454 Namespace Write Protected: No 00:08:41.454 Number of LBA Formats: 8 00:08:41.454 Current LBA Format: LBA Format #04 00:08:41.454 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.454 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.454 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.454 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.454 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.454 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.454 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.454 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.454 00:08:41.454 NVM Specific Namespace Data 00:08:41.454 =========================== 00:08:41.454 Logical Block Storage Tag Mask: 0 00:08:41.454 Protection Information Capabilities: 00:08:41.454 16b Guard Protection Information Storage Tag Support: No 00:08:41.454 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:41.454 Storage Tag Check Read Support: No 00:08:41.454 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.454 04:25:37 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:41.454 04:25:37 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:41.716 ===================================================== 00:08:41.716 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:41.716 ===================================================== 00:08:41.716 Controller Capabilities/Features 00:08:41.716 ================================ 00:08:41.716 Vendor ID: 1b36 00:08:41.716 Subsystem Vendor ID: 1af4 00:08:41.716 Serial Number: 12343 00:08:41.716 Model Number: QEMU NVMe Ctrl 00:08:41.716 Firmware Version: 8.0.0 00:08:41.716 Recommended Arb Burst: 6 00:08:41.716 IEEE OUI Identifier: 00 54 52 00:08:41.716 Multi-path I/O 00:08:41.716 May have multiple subsystem ports: No 00:08:41.716 May have multiple controllers: Yes 00:08:41.716 Associated with SR-IOV VF: No 00:08:41.716 Max Data Transfer Size: 524288 00:08:41.716 Max Number of Namespaces: 
256 00:08:41.716 Max Number of I/O Queues: 64 00:08:41.716 NVMe Specification Version (VS): 1.4 00:08:41.716 NVMe Specification Version (Identify): 1.4 00:08:41.716 Maximum Queue Entries: 2048 00:08:41.716 Contiguous Queues Required: Yes 00:08:41.716 Arbitration Mechanisms Supported 00:08:41.716 Weighted Round Robin: Not Supported 00:08:41.716 Vendor Specific: Not Supported 00:08:41.716 Reset Timeout: 7500 ms 00:08:41.716 Doorbell Stride: 4 bytes 00:08:41.716 NVM Subsystem Reset: Not Supported 00:08:41.716 Command Sets Supported 00:08:41.716 NVM Command Set: Supported 00:08:41.716 Boot Partition: Not Supported 00:08:41.716 Memory Page Size Minimum: 4096 bytes 00:08:41.716 Memory Page Size Maximum: 65536 bytes 00:08:41.716 Persistent Memory Region: Not Supported 00:08:41.716 Optional Asynchronous Events Supported 00:08:41.716 Namespace Attribute Notices: Supported 00:08:41.716 Firmware Activation Notices: Not Supported 00:08:41.716 ANA Change Notices: Not Supported 00:08:41.716 PLE Aggregate Log Change Notices: Not Supported 00:08:41.716 LBA Status Info Alert Notices: Not Supported 00:08:41.716 EGE Aggregate Log Change Notices: Not Supported 00:08:41.716 Normal NVM Subsystem Shutdown event: Not Supported 00:08:41.716 Zone Descriptor Change Notices: Not Supported 00:08:41.716 Discovery Log Change Notices: Not Supported 00:08:41.716 Controller Attributes 00:08:41.716 128-bit Host Identifier: Not Supported 00:08:41.716 Non-Operational Permissive Mode: Not Supported 00:08:41.716 NVM Sets: Not Supported 00:08:41.716 Read Recovery Levels: Not Supported 00:08:41.716 Endurance Groups: Supported 00:08:41.716 Predictable Latency Mode: Not Supported 00:08:41.716 Traffic Based Keep Alive: Not Supported 00:08:41.716 Namespace Granularity: Not Supported 00:08:41.716 SQ Associations: Not Supported 00:08:41.716 UUID List: Not Supported 00:08:41.716 Multi-Domain Subsystem: Not Supported 00:08:41.716 Fixed Capacity Management: Not Supported 00:08:41.716 Variable Capacity Management: Not Supported 00:08:41.716 Delete Endurance Group: Not Supported 00:08:41.716 Delete NVM Set: Not Supported 00:08:41.716 Extended LBA Formats Supported: Supported 00:08:41.716 Flexible Data Placement Supported: Supported 00:08:41.716 00:08:41.716 Controller Memory Buffer Support 00:08:41.716 ================================ 00:08:41.716 Supported: No 00:08:41.716 00:08:41.716 Persistent Memory Region Support 00:08:41.716 ================================ 00:08:41.716 Supported: No 00:08:41.716 00:08:41.716 Admin Command Set Attributes 00:08:41.716 ============================ 00:08:41.716 Security Send/Receive: Not Supported 00:08:41.716 Format NVM: Supported 00:08:41.716 Firmware Activate/Download: Not Supported 00:08:41.716 Namespace Management: Supported 00:08:41.716 Device Self-Test: Not Supported 00:08:41.716 Directives: Supported 00:08:41.716 NVMe-MI: Not Supported 00:08:41.716 Virtualization Management: Not Supported 00:08:41.716 Doorbell Buffer Config: Supported 00:08:41.716 Get LBA Status Capability: Not Supported 00:08:41.716 Command & Feature Lockdown Capability: Not Supported 00:08:41.716 Abort Command Limit: 4 00:08:41.716 Async Event Request Limit: 4 00:08:41.716 Number of Firmware Slots: N/A 00:08:41.716 Firmware Slot 1 Read-Only: N/A 00:08:41.716 Firmware Activation Without Reset: N/A 00:08:41.716 Multiple Update Detection Support: N/A 00:08:41.716 Firmware Update Granularity: No Information Provided 00:08:41.716 Per-Namespace SMART Log: Yes 00:08:41.716 Asymmetric Namespace Access Log Page: Not Supported
00:08:41.716 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:41.716 Command Effects Log Page: Supported 00:08:41.716 Get Log Page Extended Data: Supported 00:08:41.716 Telemetry Log Pages: Not Supported 00:08:41.716 Persistent Event Log Pages: Not Supported 00:08:41.716 Supported Log Pages Log Page: May Support 00:08:41.716 Commands Supported & Effects Log Page: Not Supported 00:08:41.716 Feature Identifiers & Effects Log Page: May Support 00:08:41.716 NVMe-MI Commands & Effects Log Page: May Support 00:08:41.717 Data Area 4 for Telemetry Log: Not Supported 00:08:41.717 Error Log Page Entries Supported: 1 00:08:41.717 Keep Alive: Not Supported 00:08:41.717 00:08:41.717 NVM Command Set Attributes 00:08:41.717 ========================== 00:08:41.717 Submission Queue Entry Size 00:08:41.717 Max: 64 00:08:41.717 Min: 64 00:08:41.717 Completion Queue Entry Size 00:08:41.717 Max: 16 00:08:41.717 Min: 16 00:08:41.717 Number of Namespaces: 256 00:08:41.717 Compare Command: Supported 00:08:41.717 Write Uncorrectable Command: Not Supported 00:08:41.717 Dataset Management Command: Supported 00:08:41.717 Write Zeroes Command: Supported 00:08:41.717 Set Features Save Field: Supported 00:08:41.717 Reservations: Not Supported 00:08:41.717 Timestamp: Supported 00:08:41.717 Copy: Supported 00:08:41.717 Volatile Write Cache: Present 00:08:41.717 Atomic Write Unit (Normal): 1 00:08:41.717 Atomic Write Unit (PFail): 1 00:08:41.717 Atomic Compare & Write Unit: 1 00:08:41.717 Fused Compare & Write: Not Supported 00:08:41.717 Scatter-Gather List 00:08:41.717 SGL Command Set: Supported 00:08:41.717 SGL Keyed: Not Supported 00:08:41.717 SGL Bit Bucket Descriptor: Not Supported 00:08:41.717 SGL Metadata Pointer: Not Supported 00:08:41.717 Oversized SGL: Not Supported 00:08:41.717 SGL Metadata Address: Not Supported 00:08:41.717 SGL Offset: Not Supported 00:08:41.717 Transport SGL Data Block: Not Supported 00:08:41.717 Replay Protected Memory Block: Not Supported 00:08:41.717 00:08:41.717 Firmware Slot Information 00:08:41.717 ========================= 00:08:41.717 Active slot: 1 00:08:41.717 Slot 1 Firmware Revision: 1.0 00:08:41.717 00:08:41.717 00:08:41.717 Commands Supported and Effects 00:08:41.717 ============================== 00:08:41.717 Admin Commands 00:08:41.717 -------------- 00:08:41.717 Delete I/O Submission Queue (00h): Supported 00:08:41.717 Create I/O Submission Queue (01h): Supported 00:08:41.717 Get Log Page (02h): Supported 00:08:41.717 Delete I/O Completion Queue (04h): Supported 00:08:41.717 Create I/O Completion Queue (05h): Supported 00:08:41.717 Identify (06h): Supported 00:08:41.717 Abort (08h): Supported 00:08:41.717 Set Features (09h): Supported 00:08:41.717 Get Features (0Ah): Supported 00:08:41.717 Asynchronous Event Request (0Ch): Supported 00:08:41.717 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:41.717 Directive Send (19h): Supported 00:08:41.717 Directive Receive (1Ah): Supported 00:08:41.717 Virtualization Management (1Ch): Supported 00:08:41.717 Doorbell Buffer Config (7Ch): Supported 00:08:41.717 Format NVM (80h): Supported LBA-Change 00:08:41.717 I/O Commands 00:08:41.717 ------------ 00:08:41.717 Flush (00h): Supported LBA-Change 00:08:41.717 Write (01h): Supported LBA-Change 00:08:41.717 Read (02h): Supported 00:08:41.717 Compare (05h): Supported 00:08:41.717 Write Zeroes (08h): Supported LBA-Change 00:08:41.717 Dataset Management (09h): Supported LBA-Change 00:08:41.717 Unknown (0Ch): Supported 00:08:41.717 Unknown (12h): Supported 00:08:41.717 Copy
(19h): Supported LBA-Change 00:08:41.717 Unknown (1Dh): Supported LBA-Change 00:08:41.717 00:08:41.717 Error Log 00:08:41.717 ========= 00:08:41.717 00:08:41.717 Arbitration 00:08:41.717 =========== 00:08:41.717 Arbitration Burst: no limit 00:08:41.717 00:08:41.717 Power Management 00:08:41.717 ================ 00:08:41.717 Number of Power States: 1 00:08:41.717 Current Power State: Power State #0 00:08:41.717 Power State #0: 00:08:41.717 Max Power: 25.00 W 00:08:41.717 Non-Operational State: Operational 00:08:41.717 Entry Latency: 16 microseconds 00:08:41.717 Exit Latency: 4 microseconds 00:08:41.717 Relative Read Throughput: 0 00:08:41.717 Relative Read Latency: 0 00:08:41.717 Relative Write Throughput: 0 00:08:41.717 Relative Write Latency: 0 00:08:41.717 Idle Power: Not Reported 00:08:41.717 Active Power: Not Reported 00:08:41.717 Non-Operational Permissive Mode: Not Supported 00:08:41.717 00:08:41.717 Health Information 00:08:41.717 ================== 00:08:41.717 Critical Warnings: 00:08:41.717 Available Spare Space: OK 00:08:41.717 Temperature: OK 00:08:41.717 Device Reliability: OK 00:08:41.717 Read Only: No 00:08:41.717 Volatile Memory Backup: OK 00:08:41.717 Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.717 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:41.717 Available Spare: 0% 00:08:41.717 Available Spare Threshold: 0% 00:08:41.717 Life Percentage Used: 0% 00:08:41.717 Data Units Read: 853 00:08:41.717 Data Units Written: 782 00:08:41.717 Host Read Commands: 39079 00:08:41.717 Host Write Commands: 38503 00:08:41.717 Controller Busy Time: 0 minutes 00:08:41.717 Power Cycles: 0 00:08:41.717 Power On Hours: 0 hours 00:08:41.717 Unsafe Shutdowns: 0 00:08:41.717 Unrecoverable Media Errors: 0 00:08:41.717 Lifetime Error Log Entries: 0 00:08:41.717 Warning Temperature Time: 0 minutes 00:08:41.717 Critical Temperature Time: 0 minutes 00:08:41.717 00:08:41.717 Number of Queues 00:08:41.717 ================ 00:08:41.717 Number of I/O Submission Queues: 64 00:08:41.717 Number of I/O Completion Queues: 64 00:08:41.717 00:08:41.717 ZNS Specific Controller Data 00:08:41.717 ============================ 00:08:41.717 Zone Append Size Limit: 0 00:08:41.717 00:08:41.717 00:08:41.717 Active Namespaces 00:08:41.717 ================= 00:08:41.717 Namespace ID:1 00:08:41.717 Error Recovery Timeout: Unlimited 00:08:41.717 Command Set Identifier: NVM (00h) 00:08:41.717 Deallocate: Supported 00:08:41.717 Deallocated/Unwritten Error: Supported 00:08:41.717 Deallocated Read Value: All 0x00 00:08:41.717 Deallocate in Write Zeroes: Not Supported 00:08:41.717 Deallocated Guard Field: 0xFFFF 00:08:41.717 Flush: Supported 00:08:41.717 Reservation: Not Supported 00:08:41.717 Namespace Sharing Capabilities: Multiple Controllers 00:08:41.717 Size (in LBAs): 262144 (1GiB) 00:08:41.717 Capacity (in LBAs): 262144 (1GiB) 00:08:41.717 Utilization (in LBAs): 262144 (1GiB) 00:08:41.717 Thin Provisioning: Not Supported 00:08:41.717 Per-NS Atomic Units: No 00:08:41.717 Maximum Single Source Range Length: 128 00:08:41.717 Maximum Copy Length: 128 00:08:41.717 Maximum Source Range Count: 128 00:08:41.717 NGUID/EUI64 Never Reused: No 00:08:41.717 Namespace Write Protected: No 00:08:41.717 Endurance group ID: 1 00:08:41.717 Number of LBA Formats: 8 00:08:41.717 Current LBA Format: LBA Format #04 00:08:41.717 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.717 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.717 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.717 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:41.717 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.717 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.717 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.717 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.717 00:08:41.717 Get Feature FDP: 00:08:41.717 ================ 00:08:41.717 Enabled: Yes 00:08:41.717 FDP configuration index: 0 00:08:41.717 00:08:41.717 FDP configurations log page 00:08:41.717 =========================== 00:08:41.717 Number of FDP configurations: 1 00:08:41.717 Version: 0 00:08:41.717 Size: 112 00:08:41.717 FDP Configuration Descriptor: 0 00:08:41.717 Descriptor Size: 96 00:08:41.717 Reclaim Group Identifier format: 2 00:08:41.717 FDP Volatile Write Cache: Not Present 00:08:41.717 FDP Configuration: Valid 00:08:41.717 Vendor Specific Size: 0 00:08:41.717 Number of Reclaim Groups: 2 00:08:41.717 Number of Reclaim Unit Handles: 8 00:08:41.717 Max Placement Identifiers: 128 00:08:41.717 Number of Namespaces Supported: 256 00:08:41.717 Reclaim Unit Nominal Size: 6000000 bytes 00:08:41.717 Estimated Reclaim Unit Time Limit: Not Reported 00:08:41.717 RUH Desc #000: RUH Type: Initially Isolated 00:08:41.717 RUH Desc #001: RUH Type: Initially Isolated 00:08:41.717 RUH Desc #002: RUH Type: Initially Isolated 00:08:41.717 RUH Desc #003: RUH Type: Initially Isolated 00:08:41.717 RUH Desc #004: RUH Type: Initially Isolated 00:08:41.717 RUH Desc #005: RUH Type: Initially Isolated 00:08:41.717 RUH Desc #006: RUH Type: Initially Isolated 00:08:41.717 RUH Desc #007: RUH Type: Initially Isolated 00:08:41.717 00:08:41.717 FDP reclaim unit handle usage log page 00:08:41.717 ====================================== 00:08:41.717 Number of Reclaim Unit Handles: 8 00:08:41.717 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:41.717 RUH Usage Desc #001: RUH Attributes: Unused 00:08:41.717 RUH Usage Desc #002: RUH Attributes: Unused 00:08:41.718 RUH Usage Desc #003: RUH Attributes: Unused 00:08:41.718 RUH Usage Desc #004: RUH Attributes: Unused 00:08:41.718 RUH Usage Desc #005: RUH Attributes: Unused 00:08:41.718 RUH Usage Desc #006: RUH Attributes: Unused 00:08:41.718 RUH Usage Desc #007: RUH Attributes: Unused 00:08:41.718 00:08:41.718 FDP statistics log page 00:08:41.718 ======================= 00:08:41.718 Host bytes with metadata written: 499359744 00:08:41.718 Media bytes with metadata written: 499412992 00:08:41.718 Media bytes erased: 0 00:08:41.718 00:08:41.718 FDP events log page 00:08:41.718 =================== 00:08:41.718 Number of FDP events: 0 00:08:41.718 00:08:41.718 NVM Specific Namespace Data 00:08:41.718 =========================== 00:08:41.718 Logical Block Storage Tag Mask: 0 00:08:41.718 Protection Information Capabilities: 00:08:41.718 16b Guard Protection Information Storage Tag Support: No 00:08:41.718 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:41.718 Storage Tag Check Read Support: No 00:08:41.718 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.718 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.718 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.718 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.718 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.718 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.718 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.718 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.718 ************************************ 00:08:41.718 END TEST nvme_identify 00:08:41.718 ************************************ 00:08:41.718 00:08:41.718 real 0m1.334s 00:08:41.718 user 0m0.443s 00:08:41.718 sys 0m0.644s 00:08:41.718 04:25:38 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.718 04:25:38 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:41.718 04:25:38 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:41.718 04:25:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.718 04:25:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.718 04:25:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.718 ************************************ 00:08:41.718 START TEST nvme_perf 00:08:41.718 ************************************ 00:08:41.718 04:25:38 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:41.718 04:25:38 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:43.114 Initializing NVMe Controllers 00:08:43.114 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:43.114 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:43.114 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:43.114 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:43.114 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:43.114 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:43.114 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:43.114 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:43.114 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:43.114 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:43.114 Initialization complete. Launching workers. 
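The two SPDK binaries exercised above can also be rerun by hand against the same controllers. The following is a minimal sketch, assuming the build tree at /home/vagrant/spdk_repo/spdk as on this CI VM (adjust for a local checkout) and the QEMU-emulated PCIe controllers enumerated above; every flag value is copied verbatim from the invocations recorded in this log, and the inline glosses of -q/-w/-o/-t/-L follow spdk_nvme_perf's common options. The -i/-N values are kept exactly as the harness passed them.

#!/usr/bin/env bash
set -euo pipefail

# Assumption: SPDK built in the same location as on this CI VM.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin

# The NVMe devices must first be unbound from the kernel driver
# (SPDK's scripts/setup.sh handles that step).

# Identify pass (nvme.sh@16 above): dump controller and namespace data
# for a single PCIe controller, selected by transport ID.
"$SPDK_BIN/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

# Perf pass (nvme.sh@22 above):
#   -q 128   -> 128 outstanding I/Os per namespace (queue depth)
#   -w read  -> sequential-read workload
#   -o 12288 -> 12288-byte (12 KiB) I/O size
#   -t 1     -> run for 1 second
#   -LL      -> software latency tracking; doubling the flag adds the
#               per-bucket latency histograms printed below
#   -i 0 -N  -> shared-memory ID and shutdown-notification settings,
#               unchanged from the harness invocation
"$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

Run against the four emulated controllers above, the perf pass produces the device table, the per-percentile summary latency data, and the per-bucket histograms that follow.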
00:08:43.114 ======================================================== 00:08:43.114 Latency(us) 00:08:43.114 Device Information : IOPS MiB/s Average min max 00:08:43.114 PCIE (0000:00:13.0) NSID 1 from core 0: 8883.08 104.10 14458.29 8051.49 41313.35 00:08:43.114 PCIE (0000:00:10.0) NSID 1 from core 0: 8883.08 104.10 14443.30 7740.12 40390.59 00:08:43.114 PCIE (0000:00:11.0) NSID 1 from core 0: 8883.08 104.10 14428.99 7773.39 39288.90 00:08:43.114 PCIE (0000:00:12.0) NSID 1 from core 0: 8883.08 104.10 14409.83 7797.27 39762.65 00:08:43.114 PCIE (0000:00:12.0) NSID 2 from core 0: 8883.08 104.10 14389.75 7952.92 38994.79 00:08:43.114 PCIE (0000:00:12.0) NSID 3 from core 0: 8946.98 104.85 14267.26 7934.70 28623.60 00:08:43.114 ======================================================== 00:08:43.114 Total : 53362.36 625.34 14399.41 7740.12 41313.35 00:08:43.114 00:08:43.114 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:43.114 ================================================================================= 00:08:43.114 1.00000% : 8318.031us 00:08:43.114 10.00000% : 9023.803us 00:08:43.114 25.00000% : 11645.243us 00:08:43.114 50.00000% : 14417.920us 00:08:43.114 75.00000% : 16837.711us 00:08:43.114 90.00000% : 18148.431us 00:08:43.114 95.00000% : 20164.923us 00:08:43.114 98.00000% : 23996.258us 00:08:43.114 99.00000% : 31255.631us 00:08:43.114 99.50000% : 40329.846us 00:08:43.114 99.90000% : 41136.443us 00:08:43.114 99.99000% : 41338.092us 00:08:43.114 99.99900% : 41338.092us 00:08:43.114 99.99990% : 41338.092us 00:08:43.114 99.99999% : 41338.092us 00:08:43.114 00:08:43.114 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:43.114 ================================================================================= 00:08:43.114 1.00000% : 8217.206us 00:08:43.114 10.00000% : 9074.215us 00:08:43.114 25.00000% : 11695.655us 00:08:43.114 50.00000% : 14417.920us 00:08:43.114 75.00000% : 16837.711us 00:08:43.114 90.00000% : 18249.255us 00:08:43.114 95.00000% : 19660.800us 00:08:43.114 98.00000% : 24500.382us 00:08:43.114 99.00000% : 30247.385us 00:08:43.114 99.50000% : 39321.600us 00:08:43.114 99.90000% : 40128.197us 00:08:43.114 99.99000% : 40531.495us 00:08:43.114 99.99900% : 40531.495us 00:08:43.114 99.99990% : 40531.495us 00:08:43.114 99.99999% : 40531.495us 00:08:43.114 00:08:43.114 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:43.114 ================================================================================= 00:08:43.114 1.00000% : 8217.206us 00:08:43.114 10.00000% : 9023.803us 00:08:43.114 25.00000% : 11594.831us 00:08:43.114 50.00000% : 14417.920us 00:08:43.114 75.00000% : 16736.886us 00:08:43.114 90.00000% : 18350.080us 00:08:43.114 95.00000% : 20366.572us 00:08:43.114 98.00000% : 22887.188us 00:08:43.114 99.00000% : 28835.840us 00:08:43.114 99.50000% : 38313.354us 00:08:43.114 99.90000% : 39119.951us 00:08:43.114 99.99000% : 39321.600us 00:08:43.114 99.99900% : 39321.600us 00:08:43.114 99.99990% : 39321.600us 00:08:43.114 99.99999% : 39321.600us 00:08:43.114 00:08:43.114 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:43.114 ================================================================================= 00:08:43.114 1.00000% : 8116.382us 00:08:43.114 10.00000% : 9074.215us 00:08:43.114 25.00000% : 11796.480us 00:08:43.114 50.00000% : 14417.920us 00:08:43.114 75.00000% : 16736.886us 00:08:43.114 90.00000% : 18249.255us 00:08:43.114 95.00000% : 20669.046us 00:08:43.114 98.00000% : 22988.012us 
00:08:43.114 99.00000% : 28835.840us 00:08:43.114 99.50000% : 38716.652us 00:08:43.114 99.90000% : 39724.898us 00:08:43.114 99.99000% : 39926.548us 00:08:43.114 99.99900% : 39926.548us 00:08:43.114 99.99990% : 39926.548us 00:08:43.114 99.99999% : 39926.548us 00:08:43.114 00:08:43.114 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:43.114 ================================================================================= 00:08:43.114 1.00000% : 8267.618us 00:08:43.114 10.00000% : 9074.215us 00:08:43.114 25.00000% : 11695.655us 00:08:43.114 50.00000% : 14417.920us 00:08:43.114 75.00000% : 16837.711us 00:08:43.114 90.00000% : 18148.431us 00:08:43.114 95.00000% : 20769.871us 00:08:43.114 98.00000% : 23088.837us 00:08:43.114 99.00000% : 27424.295us 00:08:43.114 99.50000% : 37910.055us 00:08:43.114 99.90000% : 38918.302us 00:08:43.114 99.99000% : 39119.951us 00:08:43.114 99.99900% : 39119.951us 00:08:43.114 99.99990% : 39119.951us 00:08:43.114 99.99999% : 39119.951us 00:08:43.114 00:08:43.114 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:43.114 ================================================================================= 00:08:43.114 1.00000% : 8318.031us 00:08:43.114 10.00000% : 9074.215us 00:08:43.114 25.00000% : 11796.480us 00:08:43.114 50.00000% : 14417.920us 00:08:43.114 75.00000% : 16736.886us 00:08:43.114 90.00000% : 18148.431us 00:08:43.114 95.00000% : 20265.748us 00:08:43.114 98.00000% : 22181.415us 00:08:43.114 99.00000% : 23895.434us 00:08:43.114 99.50000% : 27625.945us 00:08:43.114 99.90000% : 28432.542us 00:08:43.114 99.99000% : 28634.191us 00:08:43.114 99.99900% : 28634.191us 00:08:43.114 99.99990% : 28634.191us 00:08:43.114 99.99999% : 28634.191us 00:08:43.114 00:08:43.114 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:43.114 ============================================================================== 00:08:43.115 Range in us Cumulative IO count 00:08:43.115 8015.557 - 8065.969: 0.0562% ( 5) 00:08:43.115 8065.969 - 8116.382: 0.1461% ( 8) 00:08:43.115 8116.382 - 8166.794: 0.3372% ( 17) 00:08:43.115 8166.794 - 8217.206: 0.5733% ( 21) 00:08:43.115 8217.206 - 8267.618: 0.8206% ( 22) 00:08:43.115 8267.618 - 8318.031: 1.2253% ( 36) 00:08:43.115 8318.031 - 8368.443: 1.7086% ( 43) 00:08:43.115 8368.443 - 8418.855: 2.1133% ( 36) 00:08:43.115 8418.855 - 8469.268: 2.6192% ( 45) 00:08:43.115 8469.268 - 8519.680: 3.1362% ( 46) 00:08:43.115 8519.680 - 8570.092: 3.8444% ( 63) 00:08:43.115 8570.092 - 8620.505: 4.4402% ( 53) 00:08:43.115 8620.505 - 8670.917: 5.0697% ( 56) 00:08:43.115 8670.917 - 8721.329: 5.7891% ( 64) 00:08:43.115 8721.329 - 8771.742: 6.4861% ( 62) 00:08:43.115 8771.742 - 8822.154: 7.1830% ( 62) 00:08:43.115 8822.154 - 8872.566: 7.9699% ( 70) 00:08:43.115 8872.566 - 8922.978: 8.7680% ( 71) 00:08:43.115 8922.978 - 8973.391: 9.4537% ( 61) 00:08:43.115 8973.391 - 9023.803: 10.1169% ( 59) 00:08:43.115 9023.803 - 9074.215: 10.6902% ( 51) 00:08:43.115 9074.215 - 9124.628: 11.3309% ( 57) 00:08:43.115 9124.628 - 9175.040: 11.8817% ( 49) 00:08:43.115 9175.040 - 9225.452: 12.4888% ( 54) 00:08:43.115 9225.452 - 9275.865: 12.9721% ( 43) 00:08:43.115 9275.865 - 9326.277: 13.4442% ( 42) 00:08:43.115 9326.277 - 9376.689: 13.8826% ( 39) 00:08:43.115 9376.689 - 9427.102: 14.2986% ( 37) 00:08:43.115 9427.102 - 9477.514: 14.7032% ( 36) 00:08:43.115 9477.514 - 9527.926: 15.1192% ( 37) 00:08:43.115 9527.926 - 9578.338: 15.4676% ( 31) 00:08:43.115 9578.338 - 9628.751: 15.8723% ( 36) 00:08:43.115 9628.751 - 9679.163: 16.2208% ( 
31) 00:08:43.115 9679.163 - 9729.575: 16.5243% ( 27) 00:08:43.115 9729.575 - 9779.988: 16.8390% ( 28) 00:08:43.115 9779.988 - 9830.400: 17.0863% ( 22) 00:08:43.115 9830.400 - 9880.812: 17.2887% ( 18) 00:08:43.115 9880.812 - 9931.225: 17.5360% ( 22) 00:08:43.115 9931.225 - 9981.637: 17.7383% ( 18) 00:08:43.115 9981.637 - 10032.049: 17.9294% ( 17) 00:08:43.115 10032.049 - 10082.462: 18.1317% ( 18) 00:08:43.115 10082.462 - 10132.874: 18.2554% ( 11) 00:08:43.115 10132.874 - 10183.286: 18.3903% ( 12) 00:08:43.115 10183.286 - 10233.698: 18.5477% ( 14) 00:08:43.115 10233.698 - 10284.111: 18.7050% ( 14) 00:08:43.115 10284.111 - 10334.523: 18.8174% ( 10) 00:08:43.115 10334.523 - 10384.935: 18.9411% ( 11) 00:08:43.115 10384.935 - 10435.348: 19.0647% ( 11) 00:08:43.115 10435.348 - 10485.760: 19.1996% ( 12) 00:08:43.115 10485.760 - 10536.172: 19.3121% ( 10) 00:08:43.115 10536.172 - 10586.585: 19.4357% ( 11) 00:08:43.115 10586.585 - 10636.997: 19.6380% ( 18) 00:08:43.115 10636.997 - 10687.409: 19.9191% ( 25) 00:08:43.115 10687.409 - 10737.822: 20.1439% ( 20) 00:08:43.115 10737.822 - 10788.234: 20.4024% ( 23) 00:08:43.115 10788.234 - 10838.646: 20.6610% ( 23) 00:08:43.115 10838.646 - 10889.058: 20.9757% ( 28) 00:08:43.115 10889.058 - 10939.471: 21.3129% ( 30) 00:08:43.115 10939.471 - 10989.883: 21.6727% ( 32) 00:08:43.115 10989.883 - 11040.295: 21.9537% ( 25) 00:08:43.115 11040.295 - 11090.708: 22.2572% ( 27) 00:08:43.115 11090.708 - 11141.120: 22.5270% ( 24) 00:08:43.115 11141.120 - 11191.532: 22.8192% ( 26) 00:08:43.115 11191.532 - 11241.945: 23.0890% ( 24) 00:08:43.115 11241.945 - 11292.357: 23.3813% ( 26) 00:08:43.115 11292.357 - 11342.769: 23.6848% ( 27) 00:08:43.115 11342.769 - 11393.182: 23.9771% ( 26) 00:08:43.115 11393.182 - 11443.594: 24.2581% ( 25) 00:08:43.115 11443.594 - 11494.006: 24.4942% ( 21) 00:08:43.115 11494.006 - 11544.418: 24.6853% ( 17) 00:08:43.115 11544.418 - 11594.831: 24.8763% ( 17) 00:08:43.115 11594.831 - 11645.243: 25.0674% ( 17) 00:08:43.115 11645.243 - 11695.655: 25.2361% ( 15) 00:08:43.115 11695.655 - 11746.068: 25.3934% ( 14) 00:08:43.115 11746.068 - 11796.480: 25.4946% ( 9) 00:08:43.115 11796.480 - 11846.892: 25.6295% ( 12) 00:08:43.115 11846.892 - 11897.305: 25.7531% ( 11) 00:08:43.115 11897.305 - 11947.717: 25.8768% ( 11) 00:08:43.115 11947.717 - 11998.129: 25.9667% ( 8) 00:08:43.115 11998.129 - 12048.542: 26.0904% ( 11) 00:08:43.115 12048.542 - 12098.954: 26.1691% ( 7) 00:08:43.115 12098.954 - 12149.366: 26.2253% ( 5) 00:08:43.115 12149.366 - 12199.778: 26.2815% ( 5) 00:08:43.115 12199.778 - 12250.191: 26.4164% ( 12) 00:08:43.115 12250.191 - 12300.603: 26.5850% ( 15) 00:08:43.115 12300.603 - 12351.015: 26.7873% ( 18) 00:08:43.115 12351.015 - 12401.428: 27.0796% ( 26) 00:08:43.115 12401.428 - 12451.840: 27.3831% ( 27) 00:08:43.115 12451.840 - 12502.252: 27.7091% ( 29) 00:08:43.115 12502.252 - 12552.665: 28.0351% ( 29) 00:08:43.115 12552.665 - 12603.077: 28.3835% ( 31) 00:08:43.115 12603.077 - 12653.489: 28.7545% ( 33) 00:08:43.115 12653.489 - 12703.902: 29.1929% ( 39) 00:08:43.115 12703.902 - 12754.314: 29.6875% ( 44) 00:08:43.115 12754.314 - 12804.726: 30.2046% ( 46) 00:08:43.115 12804.726 - 12855.138: 30.7217% ( 46) 00:08:43.115 12855.138 - 12905.551: 31.1826% ( 41) 00:08:43.115 12905.551 - 13006.375: 32.4191% ( 110) 00:08:43.115 13006.375 - 13107.200: 33.7230% ( 116) 00:08:43.115 13107.200 - 13208.025: 35.0045% ( 114) 00:08:43.115 13208.025 - 13308.849: 36.3647% ( 121) 00:08:43.115 13308.849 - 13409.674: 37.5225% ( 103) 00:08:43.115 13409.674 - 13510.498: 39.0625% 
( 137) 00:08:43.115 13510.498 - 13611.323: 40.2878% ( 109) 00:08:43.115 13611.323 - 13712.148: 41.5130% ( 109) 00:08:43.115 13712.148 - 13812.972: 42.8282% ( 117) 00:08:43.115 13812.972 - 13913.797: 44.0535% ( 109) 00:08:43.115 13913.797 - 14014.622: 45.3462% ( 115) 00:08:43.115 14014.622 - 14115.446: 46.6839% ( 119) 00:08:43.115 14115.446 - 14216.271: 47.9317% ( 111) 00:08:43.115 14216.271 - 14317.095: 49.2469% ( 117) 00:08:43.115 14317.095 - 14417.920: 50.6520% ( 125) 00:08:43.115 14417.920 - 14518.745: 51.8548% ( 107) 00:08:43.115 14518.745 - 14619.569: 53.1138% ( 112) 00:08:43.115 14619.569 - 14720.394: 54.2716% ( 103) 00:08:43.115 14720.394 - 14821.218: 55.4856% ( 108) 00:08:43.115 14821.218 - 14922.043: 56.5985% ( 99) 00:08:43.115 14922.043 - 15022.868: 57.5540% ( 85) 00:08:43.115 15022.868 - 15123.692: 58.5319% ( 87) 00:08:43.115 15123.692 - 15224.517: 59.3413% ( 72) 00:08:43.115 15224.517 - 15325.342: 60.1956% ( 76) 00:08:43.115 15325.342 - 15426.166: 60.8925% ( 62) 00:08:43.115 15426.166 - 15526.991: 61.5670% ( 60) 00:08:43.115 15526.991 - 15627.815: 62.4438% ( 78) 00:08:43.115 15627.815 - 15728.640: 63.4555% ( 90) 00:08:43.115 15728.640 - 15829.465: 64.6133% ( 103) 00:08:43.115 15829.465 - 15930.289: 65.6362% ( 91) 00:08:43.115 15930.289 - 16031.114: 66.6367% ( 89) 00:08:43.115 16031.114 - 16131.938: 67.7046% ( 95) 00:08:43.115 16131.938 - 16232.763: 68.6713% ( 86) 00:08:43.115 16232.763 - 16333.588: 69.8516% ( 105) 00:08:43.115 16333.588 - 16434.412: 71.1781% ( 118) 00:08:43.115 16434.412 - 16535.237: 72.3921% ( 108) 00:08:43.115 16535.237 - 16636.062: 73.6286% ( 110) 00:08:43.115 16636.062 - 16736.886: 74.6403% ( 90) 00:08:43.115 16736.886 - 16837.711: 75.9105% ( 113) 00:08:43.115 16837.711 - 16938.535: 77.1808% ( 113) 00:08:43.115 16938.535 - 17039.360: 78.7095% ( 136) 00:08:43.115 17039.360 - 17140.185: 79.8561% ( 102) 00:08:43.115 17140.185 - 17241.009: 81.0252% ( 104) 00:08:43.115 17241.009 - 17341.834: 82.1268% ( 98) 00:08:43.115 17341.834 - 17442.658: 83.1722% ( 93) 00:08:43.115 17442.658 - 17543.483: 84.2963% ( 100) 00:08:43.115 17543.483 - 17644.308: 85.4092% ( 99) 00:08:43.115 17644.308 - 17745.132: 86.4658% ( 94) 00:08:43.115 17745.132 - 17845.957: 87.6237% ( 103) 00:08:43.115 17845.957 - 17946.782: 89.0400% ( 126) 00:08:43.115 17946.782 - 18047.606: 89.9281% ( 79) 00:08:43.115 18047.606 - 18148.431: 90.4451% ( 46) 00:08:43.115 18148.431 - 18249.255: 91.0072% ( 50) 00:08:43.115 18249.255 - 18350.080: 91.5130% ( 45) 00:08:43.115 18350.080 - 18450.905: 91.9627% ( 40) 00:08:43.115 18450.905 - 18551.729: 92.3449% ( 34) 00:08:43.115 18551.729 - 18652.554: 92.6484% ( 27) 00:08:43.115 18652.554 - 18753.378: 92.9406% ( 26) 00:08:43.115 18753.378 - 18854.203: 93.1430% ( 18) 00:08:43.115 18854.203 - 18955.028: 93.3004% ( 14) 00:08:43.115 18955.028 - 19055.852: 93.4690% ( 15) 00:08:43.115 19055.852 - 19156.677: 93.6938% ( 20) 00:08:43.115 19156.677 - 19257.502: 93.8512% ( 14) 00:08:43.115 19257.502 - 19358.326: 93.9973% ( 13) 00:08:43.115 19358.326 - 19459.151: 94.0647% ( 6) 00:08:43.115 19459.151 - 19559.975: 94.1659% ( 9) 00:08:43.115 19559.975 - 19660.800: 94.2558% ( 8) 00:08:43.115 19660.800 - 19761.625: 94.3795% ( 11) 00:08:43.115 19761.625 - 19862.449: 94.5481% ( 15) 00:08:43.115 19862.449 - 19963.274: 94.7055% ( 14) 00:08:43.115 19963.274 - 20064.098: 94.8629% ( 14) 00:08:43.115 20064.098 - 20164.923: 95.0315% ( 15) 00:08:43.115 20164.923 - 20265.748: 95.2001% ( 15) 00:08:43.115 20265.748 - 20366.572: 95.3687% ( 15) 00:08:43.115 20366.572 - 20467.397: 95.5598% ( 17) 
00:08:43.115 20467.397 - 20568.222: 95.7284% ( 15) 00:08:43.115 20568.222 - 20669.046: 95.8970% ( 15) 00:08:43.115 20669.046 - 20769.871: 96.0094% ( 10) 00:08:43.115 20769.871 - 20870.695: 96.1331% ( 11) 00:08:43.115 20870.695 - 20971.520: 96.2567% ( 11) 00:08:43.115 20971.520 - 21072.345: 96.3916% ( 12) 00:08:43.115 21072.345 - 21173.169: 96.5265% ( 12) 00:08:43.115 21173.169 - 21273.994: 96.6614% ( 12) 00:08:43.115 21273.994 - 21374.818: 96.7851% ( 11) 00:08:43.115 21374.818 - 21475.643: 96.8750% ( 8) 00:08:43.115 21475.643 - 21576.468: 96.9424% ( 6) 00:08:43.115 21576.468 - 21677.292: 97.0099% ( 6) 00:08:43.115 21677.292 - 21778.117: 97.0661% ( 5) 00:08:43.115 21778.117 - 21878.942: 97.1223% ( 5) 00:08:43.115 22080.591 - 22181.415: 97.1673% ( 4) 00:08:43.115 22181.415 - 22282.240: 97.2122% ( 4) 00:08:43.116 22282.240 - 22383.065: 97.2572% ( 4) 00:08:43.116 22383.065 - 22483.889: 97.3134% ( 5) 00:08:43.116 22483.889 - 22584.714: 97.3584% ( 4) 00:08:43.116 22584.714 - 22685.538: 97.4146% ( 5) 00:08:43.116 22685.538 - 22786.363: 97.4595% ( 4) 00:08:43.116 22786.363 - 22887.188: 97.5157% ( 5) 00:08:43.116 22887.188 - 22988.012: 97.5719% ( 5) 00:08:43.116 22988.012 - 23088.837: 97.6394% ( 6) 00:08:43.116 23088.837 - 23189.662: 97.7181% ( 7) 00:08:43.116 23189.662 - 23290.486: 97.7855% ( 6) 00:08:43.116 23290.486 - 23391.311: 97.8417% ( 5) 00:08:43.116 23592.960 - 23693.785: 97.8530% ( 1) 00:08:43.116 23693.785 - 23794.609: 97.8867% ( 3) 00:08:43.116 23794.609 - 23895.434: 97.9541% ( 6) 00:08:43.116 23895.434 - 23996.258: 98.0328% ( 7) 00:08:43.116 23996.258 - 24097.083: 98.1003% ( 6) 00:08:43.116 24097.083 - 24197.908: 98.1677% ( 6) 00:08:43.116 24197.908 - 24298.732: 98.2352% ( 6) 00:08:43.116 24298.732 - 24399.557: 98.3026% ( 6) 00:08:43.116 24399.557 - 24500.382: 98.3813% ( 7) 00:08:43.116 24500.382 - 24601.206: 98.4487% ( 6) 00:08:43.116 24601.206 - 24702.031: 98.5162% ( 6) 00:08:43.116 24702.031 - 24802.855: 98.5612% ( 4) 00:08:43.116 30247.385 - 30449.034: 98.6286% ( 6) 00:08:43.116 30449.034 - 30650.683: 98.7298% ( 9) 00:08:43.116 30650.683 - 30852.332: 98.8309% ( 9) 00:08:43.116 30852.332 - 31053.982: 98.9209% ( 8) 00:08:43.116 31053.982 - 31255.631: 99.0108% ( 8) 00:08:43.116 31255.631 - 31457.280: 99.1007% ( 8) 00:08:43.116 31457.280 - 31658.929: 99.1906% ( 8) 00:08:43.116 31658.929 - 31860.578: 99.2806% ( 8) 00:08:43.116 39724.898 - 39926.548: 99.3480% ( 6) 00:08:43.116 39926.548 - 40128.197: 99.4492% ( 9) 00:08:43.116 40128.197 - 40329.846: 99.5391% ( 8) 00:08:43.116 40329.846 - 40531.495: 99.6403% ( 9) 00:08:43.116 40531.495 - 40733.145: 99.7190% ( 7) 00:08:43.116 40733.145 - 40934.794: 99.8201% ( 9) 00:08:43.116 40934.794 - 41136.443: 99.9213% ( 9) 00:08:43.116 41136.443 - 41338.092: 100.0000% ( 7) 00:08:43.116 00:08:43.116 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:43.116 ============================================================================== 00:08:43.116 Range in us Cumulative IO count 00:08:43.116 7713.083 - 7763.495: 0.0225% ( 2) 00:08:43.116 7763.495 - 7813.908: 0.0674% ( 4) 00:08:43.116 7813.908 - 7864.320: 0.1012% ( 3) 00:08:43.116 7864.320 - 7914.732: 0.1574% ( 5) 00:08:43.116 7914.732 - 7965.145: 0.2136% ( 5) 00:08:43.116 7965.145 - 8015.557: 0.3260% ( 10) 00:08:43.116 8015.557 - 8065.969: 0.5058% ( 16) 00:08:43.116 8065.969 - 8116.382: 0.6969% ( 17) 00:08:43.116 8116.382 - 8166.794: 0.8880% ( 17) 00:08:43.116 8166.794 - 8217.206: 1.1466% ( 23) 00:08:43.116 8217.206 - 8267.618: 1.4613% ( 28) 00:08:43.116 8267.618 - 8318.031: 1.7986% ( 
30) 00:08:43.116 8318.031 - 8368.443: 2.3156% ( 46) 00:08:43.116 8368.443 - 8418.855: 2.7203% ( 36) 00:08:43.116 8418.855 - 8469.268: 3.2037% ( 43) 00:08:43.116 8469.268 - 8519.680: 3.7433% ( 48) 00:08:43.116 8519.680 - 8570.092: 4.2828% ( 48) 00:08:43.116 8570.092 - 8620.505: 4.8336% ( 49) 00:08:43.116 8620.505 - 8670.917: 5.3282% ( 44) 00:08:43.116 8670.917 - 8721.329: 5.8790% ( 49) 00:08:43.116 8721.329 - 8771.742: 6.4861% ( 54) 00:08:43.116 8771.742 - 8822.154: 7.2055% ( 64) 00:08:43.116 8822.154 - 8872.566: 7.8013% ( 53) 00:08:43.116 8872.566 - 8922.978: 8.4420% ( 57) 00:08:43.116 8922.978 - 8973.391: 9.0827% ( 57) 00:08:43.116 8973.391 - 9023.803: 9.7122% ( 56) 00:08:43.116 9023.803 - 9074.215: 10.4204% ( 63) 00:08:43.116 9074.215 - 9124.628: 11.0724% ( 58) 00:08:43.116 9124.628 - 9175.040: 11.7693% ( 62) 00:08:43.116 9175.040 - 9225.452: 12.3988% ( 56) 00:08:43.116 9225.452 - 9275.865: 12.9721% ( 51) 00:08:43.116 9275.865 - 9326.277: 13.4892% ( 46) 00:08:43.116 9326.277 - 9376.689: 13.9051% ( 37) 00:08:43.116 9376.689 - 9427.102: 14.2873% ( 34) 00:08:43.116 9427.102 - 9477.514: 14.7370% ( 40) 00:08:43.116 9477.514 - 9527.926: 15.1192% ( 34) 00:08:43.116 9527.926 - 9578.338: 15.4789% ( 32) 00:08:43.116 9578.338 - 9628.751: 15.7711% ( 26) 00:08:43.116 9628.751 - 9679.163: 16.0634% ( 26) 00:08:43.116 9679.163 - 9729.575: 16.3669% ( 27) 00:08:43.116 9729.575 - 9779.988: 16.6030% ( 21) 00:08:43.116 9779.988 - 9830.400: 16.8728% ( 24) 00:08:43.116 9830.400 - 9880.812: 17.1201% ( 22) 00:08:43.116 9880.812 - 9931.225: 17.3224% ( 18) 00:08:43.116 9931.225 - 9981.637: 17.5809% ( 23) 00:08:43.116 9981.637 - 10032.049: 17.7833% ( 18) 00:08:43.116 10032.049 - 10082.462: 18.0306% ( 22) 00:08:43.116 10082.462 - 10132.874: 18.1655% ( 12) 00:08:43.116 10132.874 - 10183.286: 18.3790% ( 19) 00:08:43.116 10183.286 - 10233.698: 18.5589% ( 16) 00:08:43.116 10233.698 - 10284.111: 18.7500% ( 17) 00:08:43.116 10284.111 - 10334.523: 18.8512% ( 9) 00:08:43.116 10334.523 - 10384.935: 19.0085% ( 14) 00:08:43.116 10384.935 - 10435.348: 19.1772% ( 15) 00:08:43.116 10435.348 - 10485.760: 19.3570% ( 16) 00:08:43.116 10485.760 - 10536.172: 19.5594% ( 18) 00:08:43.116 10536.172 - 10586.585: 19.8179% ( 23) 00:08:43.116 10586.585 - 10636.997: 20.0877% ( 24) 00:08:43.116 10636.997 - 10687.409: 20.3350% ( 22) 00:08:43.116 10687.409 - 10737.822: 20.5598% ( 20) 00:08:43.116 10737.822 - 10788.234: 20.8633% ( 27) 00:08:43.116 10788.234 - 10838.646: 21.2230% ( 32) 00:08:43.116 10838.646 - 10889.058: 21.5603% ( 30) 00:08:43.116 10889.058 - 10939.471: 21.8413% ( 25) 00:08:43.116 10939.471 - 10989.883: 22.0773% ( 21) 00:08:43.116 10989.883 - 11040.295: 22.3584% ( 25) 00:08:43.116 11040.295 - 11090.708: 22.6394% ( 25) 00:08:43.116 11090.708 - 11141.120: 22.9092% ( 24) 00:08:43.116 11141.120 - 11191.532: 23.1565% ( 22) 00:08:43.116 11191.532 - 11241.945: 23.4150% ( 23) 00:08:43.116 11241.945 - 11292.357: 23.6848% ( 24) 00:08:43.116 11292.357 - 11342.769: 23.8647% ( 16) 00:08:43.116 11342.769 - 11393.182: 24.0108% ( 13) 00:08:43.116 11393.182 - 11443.594: 24.2469% ( 21) 00:08:43.116 11443.594 - 11494.006: 24.4492% ( 18) 00:08:43.116 11494.006 - 11544.418: 24.6066% ( 14) 00:08:43.116 11544.418 - 11594.831: 24.8201% ( 19) 00:08:43.116 11594.831 - 11645.243: 24.9888% ( 15) 00:08:43.116 11645.243 - 11695.655: 25.1574% ( 15) 00:08:43.116 11695.655 - 11746.068: 25.3260% ( 15) 00:08:43.116 11746.068 - 11796.480: 25.4946% ( 15) 00:08:43.116 11796.480 - 11846.892: 25.5958% ( 9) 00:08:43.116 11846.892 - 11897.305: 25.6295% ( 3) 00:08:43.116 
11897.305 - 11947.717: 25.6857% ( 5) 00:08:43.116 11947.717 - 11998.129: 25.8543% ( 15) 00:08:43.116 11998.129 - 12048.542: 25.8993% ( 4) 00:08:43.116 12048.542 - 12098.954: 26.0567% ( 14) 00:08:43.116 12098.954 - 12149.366: 26.1915% ( 12) 00:08:43.116 12149.366 - 12199.778: 26.4838% ( 26) 00:08:43.116 12199.778 - 12250.191: 26.6187% ( 12) 00:08:43.116 12250.191 - 12300.603: 26.8997% ( 25) 00:08:43.116 12300.603 - 12351.015: 27.2932% ( 35) 00:08:43.116 12351.015 - 12401.428: 27.5854% ( 26) 00:08:43.116 12401.428 - 12451.840: 27.8665% ( 25) 00:08:43.116 12451.840 - 12502.252: 28.1924% ( 29) 00:08:43.116 12502.252 - 12552.665: 28.5297% ( 30) 00:08:43.116 12552.665 - 12603.077: 28.8894% ( 32) 00:08:43.116 12603.077 - 12653.489: 29.1929% ( 27) 00:08:43.116 12653.489 - 12703.902: 29.5976% ( 36) 00:08:43.116 12703.902 - 12754.314: 30.1821% ( 52) 00:08:43.116 12754.314 - 12804.726: 30.4969% ( 28) 00:08:43.116 12804.726 - 12855.138: 31.0926% ( 53) 00:08:43.116 12855.138 - 12905.551: 31.7221% ( 56) 00:08:43.116 12905.551 - 13006.375: 32.7226% ( 89) 00:08:43.116 13006.375 - 13107.200: 33.8579% ( 101) 00:08:43.116 13107.200 - 13208.025: 35.0495% ( 106) 00:08:43.116 13208.025 - 13308.849: 36.2185% ( 104) 00:08:43.116 13308.849 - 13409.674: 37.4213% ( 107) 00:08:43.116 13409.674 - 13510.498: 38.9613% ( 137) 00:08:43.116 13510.498 - 13611.323: 40.2316% ( 113) 00:08:43.116 13611.323 - 13712.148: 41.5243% ( 115) 00:08:43.116 13712.148 - 13812.972: 43.0531% ( 136) 00:08:43.116 13812.972 - 13913.797: 44.3458% ( 115) 00:08:43.116 13913.797 - 14014.622: 45.6610% ( 117) 00:08:43.116 14014.622 - 14115.446: 46.6727% ( 90) 00:08:43.116 14115.446 - 14216.271: 48.0328% ( 121) 00:08:43.116 14216.271 - 14317.095: 49.3480% ( 117) 00:08:43.116 14317.095 - 14417.920: 50.4834% ( 101) 00:08:43.116 14417.920 - 14518.745: 51.6637% ( 105) 00:08:43.116 14518.745 - 14619.569: 52.8777% ( 108) 00:08:43.116 14619.569 - 14720.394: 53.9681% ( 97) 00:08:43.116 14720.394 - 14821.218: 54.8898% ( 82) 00:08:43.116 14821.218 - 14922.043: 55.6879% ( 71) 00:08:43.116 14922.043 - 15022.868: 56.7558% ( 95) 00:08:43.116 15022.868 - 15123.692: 57.6888% ( 83) 00:08:43.116 15123.692 - 15224.517: 58.7343% ( 93) 00:08:43.116 15224.517 - 15325.342: 59.8022% ( 95) 00:08:43.116 15325.342 - 15426.166: 60.8813% ( 96) 00:08:43.116 15426.166 - 15526.991: 61.8368% ( 85) 00:08:43.116 15526.991 - 15627.815: 62.9496% ( 99) 00:08:43.116 15627.815 - 15728.640: 63.8489% ( 80) 00:08:43.116 15728.640 - 15829.465: 64.8044% ( 85) 00:08:43.116 15829.465 - 15930.289: 65.6812% ( 78) 00:08:43.116 15930.289 - 16031.114: 66.8728% ( 106) 00:08:43.116 16031.114 - 16131.938: 67.9969% ( 100) 00:08:43.116 16131.938 - 16232.763: 69.1434% ( 102) 00:08:43.116 16232.763 - 16333.588: 70.3799% ( 110) 00:08:43.116 16333.588 - 16434.412: 71.1106% ( 65) 00:08:43.116 16434.412 - 16535.237: 72.2122% ( 98) 00:08:43.116 16535.237 - 16636.062: 73.0778% ( 77) 00:08:43.116 16636.062 - 16736.886: 74.1120% ( 92) 00:08:43.116 16736.886 - 16837.711: 75.4047% ( 115) 00:08:43.116 16837.711 - 16938.535: 76.7536% ( 120) 00:08:43.116 16938.535 - 17039.360: 77.9564% ( 107) 00:08:43.116 17039.360 - 17140.185: 79.1817% ( 109) 00:08:43.116 17140.185 - 17241.009: 80.7442% ( 139) 00:08:43.117 17241.009 - 17341.834: 81.9582% ( 108) 00:08:43.117 17341.834 - 17442.658: 83.0036% ( 93) 00:08:43.117 17442.658 - 17543.483: 84.0040% ( 89) 00:08:43.117 17543.483 - 17644.308: 85.4092% ( 125) 00:08:43.117 17644.308 - 17745.132: 86.2972% ( 79) 00:08:43.117 17745.132 - 17845.957: 87.2977% ( 89) 00:08:43.117 17845.957 - 
17946.782: 88.2194% ( 82) 00:08:43.117 17946.782 - 18047.606: 89.0513% ( 74) 00:08:43.117 18047.606 - 18148.431: 89.8719% ( 73) 00:08:43.117 18148.431 - 18249.255: 90.5688% ( 62) 00:08:43.117 18249.255 - 18350.080: 91.3219% ( 67) 00:08:43.117 18350.080 - 18450.905: 91.7716% ( 40) 00:08:43.117 18450.905 - 18551.729: 92.2437% ( 42) 00:08:43.117 18551.729 - 18652.554: 92.7158% ( 42) 00:08:43.117 18652.554 - 18753.378: 93.1992% ( 43) 00:08:43.117 18753.378 - 18854.203: 93.5027% ( 27) 00:08:43.117 18854.203 - 18955.028: 93.8624% ( 32) 00:08:43.117 18955.028 - 19055.852: 94.1097% ( 22) 00:08:43.117 19055.852 - 19156.677: 94.3907% ( 25) 00:08:43.117 19156.677 - 19257.502: 94.5594% ( 15) 00:08:43.117 19257.502 - 19358.326: 94.6718% ( 10) 00:08:43.117 19358.326 - 19459.151: 94.8629% ( 17) 00:08:43.117 19459.151 - 19559.975: 94.9528% ( 8) 00:08:43.117 19559.975 - 19660.800: 95.1439% ( 17) 00:08:43.117 19660.800 - 19761.625: 95.2338% ( 8) 00:08:43.117 19761.625 - 19862.449: 95.3125% ( 7) 00:08:43.117 19862.449 - 19963.274: 95.3687% ( 5) 00:08:43.117 19963.274 - 20064.098: 95.4024% ( 3) 00:08:43.117 20064.098 - 20164.923: 95.4362% ( 3) 00:08:43.117 20164.923 - 20265.748: 95.4699% ( 3) 00:08:43.117 20265.748 - 20366.572: 95.5036% ( 3) 00:08:43.117 20366.572 - 20467.397: 95.5823% ( 7) 00:08:43.117 20467.397 - 20568.222: 95.6160% ( 3) 00:08:43.117 20568.222 - 20669.046: 95.6610% ( 4) 00:08:43.117 20669.046 - 20769.871: 95.7172% ( 5) 00:08:43.117 20769.871 - 20870.695: 95.8296% ( 10) 00:08:43.117 20870.695 - 20971.520: 95.9308% ( 9) 00:08:43.117 20971.520 - 21072.345: 96.0769% ( 13) 00:08:43.117 21072.345 - 21173.169: 96.2118% ( 12) 00:08:43.117 21173.169 - 21273.994: 96.2680% ( 5) 00:08:43.117 21273.994 - 21374.818: 96.3692% ( 9) 00:08:43.117 21374.818 - 21475.643: 96.4703% ( 9) 00:08:43.117 21475.643 - 21576.468: 96.6389% ( 15) 00:08:43.117 21576.468 - 21677.292: 96.7064% ( 6) 00:08:43.117 21677.292 - 21778.117: 96.7626% ( 5) 00:08:43.117 21778.117 - 21878.942: 97.0211% ( 23) 00:08:43.117 21878.942 - 21979.766: 97.0661% ( 4) 00:08:43.117 21979.766 - 22080.591: 97.1673% ( 9) 00:08:43.117 22080.591 - 22181.415: 97.2010% ( 3) 00:08:43.117 22181.415 - 22282.240: 97.2909% ( 8) 00:08:43.117 22282.240 - 22383.065: 97.3471% ( 5) 00:08:43.117 22383.065 - 22483.889: 97.3921% ( 4) 00:08:43.117 22483.889 - 22584.714: 97.4708% ( 7) 00:08:43.117 22584.714 - 22685.538: 97.5045% ( 3) 00:08:43.117 22685.538 - 22786.363: 97.6619% ( 14) 00:08:43.117 22887.188 - 22988.012: 97.7068% ( 4) 00:08:43.117 22988.012 - 23088.837: 97.7743% ( 6) 00:08:43.117 23088.837 - 23189.662: 97.8417% ( 6) 00:08:43.117 23996.258 - 24097.083: 97.8979% ( 5) 00:08:43.117 24097.083 - 24197.908: 97.9429% ( 4) 00:08:43.117 24197.908 - 24298.732: 97.9541% ( 1) 00:08:43.117 24298.732 - 24399.557: 97.9991% ( 4) 00:08:43.117 24399.557 - 24500.382: 98.0216% ( 2) 00:08:43.117 24500.382 - 24601.206: 98.0553% ( 3) 00:08:43.117 24601.206 - 24702.031: 98.0778% ( 2) 00:08:43.117 24702.031 - 24802.855: 98.1228% ( 4) 00:08:43.117 24802.855 - 24903.680: 98.1677% ( 4) 00:08:43.117 24903.680 - 25004.505: 98.2014% ( 3) 00:08:43.117 25004.505 - 25105.329: 98.2127% ( 1) 00:08:43.117 25105.329 - 25206.154: 98.2352% ( 2) 00:08:43.117 25206.154 - 25306.978: 98.2914% ( 5) 00:08:43.117 25407.803 - 25508.628: 98.3251% ( 3) 00:08:43.117 25508.628 - 25609.452: 98.3925% ( 6) 00:08:43.117 25609.452 - 25710.277: 98.4150% ( 2) 00:08:43.117 25811.102 - 26012.751: 98.4825% ( 6) 00:08:43.117 26012.751 - 26214.400: 98.5499% ( 6) 00:08:43.117 26214.400 - 26416.049: 98.5612% ( 1) 
00:08:43.117 28835.840 - 29037.489: 98.5724% ( 1) 00:08:43.117 29037.489 - 29239.138: 98.6398% ( 6) 00:08:43.117 29239.138 - 29440.788: 98.7635% ( 11) 00:08:43.117 29440.788 - 29642.437: 98.8197% ( 5) 00:08:43.117 29642.437 - 29844.086: 98.8984% ( 7) 00:08:43.117 29844.086 - 30045.735: 98.9883% ( 8) 00:08:43.117 30045.735 - 30247.385: 99.0670% ( 7) 00:08:43.117 30247.385 - 30449.034: 99.1232% ( 5) 00:08:43.117 30449.034 - 30650.683: 99.2469% ( 11) 00:08:43.117 30650.683 - 30852.332: 99.2806% ( 3) 00:08:43.117 38515.003 - 38716.652: 99.3255% ( 4) 00:08:43.117 38716.652 - 38918.302: 99.3705% ( 4) 00:08:43.117 38918.302 - 39119.951: 99.4717% ( 9) 00:08:43.117 39119.951 - 39321.600: 99.5504% ( 7) 00:08:43.117 39321.600 - 39523.249: 99.6403% ( 8) 00:08:43.117 39523.249 - 39724.898: 99.6965% ( 5) 00:08:43.117 39724.898 - 39926.548: 99.7864% ( 8) 00:08:43.117 39926.548 - 40128.197: 99.9101% ( 11) 00:08:43.117 40128.197 - 40329.846: 99.9775% ( 6) 00:08:43.117 40329.846 - 40531.495: 100.0000% ( 2) 00:08:43.117 00:08:43.117 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:43.117 ============================================================================== 00:08:43.117 Range in us Cumulative IO count 00:08:43.117 7763.495 - 7813.908: 0.0787% ( 7) 00:08:43.117 7813.908 - 7864.320: 0.1012% ( 2) 00:08:43.117 7864.320 - 7914.732: 0.1461% ( 4) 00:08:43.117 7914.732 - 7965.145: 0.1911% ( 4) 00:08:43.117 7965.145 - 8015.557: 0.2923% ( 9) 00:08:43.117 8015.557 - 8065.969: 0.4047% ( 10) 00:08:43.117 8065.969 - 8116.382: 0.6407% ( 21) 00:08:43.117 8116.382 - 8166.794: 0.8656% ( 20) 00:08:43.117 8166.794 - 8217.206: 1.1016% ( 21) 00:08:43.117 8217.206 - 8267.618: 1.3602% ( 23) 00:08:43.117 8267.618 - 8318.031: 1.6299% ( 24) 00:08:43.117 8318.031 - 8368.443: 1.9110% ( 25) 00:08:43.117 8368.443 - 8418.855: 2.2594% ( 31) 00:08:43.117 8418.855 - 8469.268: 2.7653% ( 45) 00:08:43.117 8469.268 - 8519.680: 3.2262% ( 41) 00:08:43.117 8519.680 - 8570.092: 3.7545% ( 47) 00:08:43.117 8570.092 - 8620.505: 4.2716% ( 46) 00:08:43.117 8620.505 - 8670.917: 4.8674% ( 53) 00:08:43.117 8670.917 - 8721.329: 5.6093% ( 66) 00:08:43.117 8721.329 - 8771.742: 6.3062% ( 62) 00:08:43.117 8771.742 - 8822.154: 6.9807% ( 60) 00:08:43.117 8822.154 - 8872.566: 7.7001% ( 64) 00:08:43.117 8872.566 - 8922.978: 8.5544% ( 76) 00:08:43.117 8922.978 - 8973.391: 9.3750% ( 73) 00:08:43.117 8973.391 - 9023.803: 10.0270% ( 58) 00:08:43.117 9023.803 - 9074.215: 10.6677% ( 57) 00:08:43.117 9074.215 - 9124.628: 11.3984% ( 65) 00:08:43.117 9124.628 - 9175.040: 12.1066% ( 63) 00:08:43.117 9175.040 - 9225.452: 12.7473% ( 57) 00:08:43.117 9225.452 - 9275.865: 13.3768% ( 56) 00:08:43.117 9275.865 - 9326.277: 13.9726% ( 53) 00:08:43.117 9326.277 - 9376.689: 14.6021% ( 56) 00:08:43.117 9376.689 - 9427.102: 15.1416% ( 48) 00:08:43.117 9427.102 - 9477.514: 15.5463% ( 36) 00:08:43.117 9477.514 - 9527.926: 15.9510% ( 36) 00:08:43.117 9527.926 - 9578.338: 16.3107% ( 32) 00:08:43.117 9578.338 - 9628.751: 16.6592% ( 31) 00:08:43.117 9628.751 - 9679.163: 16.9402% ( 25) 00:08:43.117 9679.163 - 9729.575: 17.2662% ( 29) 00:08:43.117 9729.575 - 9779.988: 17.4798% ( 19) 00:08:43.117 9779.988 - 9830.400: 17.6596% ( 16) 00:08:43.117 9830.400 - 9880.812: 17.7945% ( 12) 00:08:43.117 9880.812 - 9931.225: 17.9182% ( 11) 00:08:43.117 9931.225 - 9981.637: 18.0755% ( 14) 00:08:43.117 9981.637 - 10032.049: 18.2104% ( 12) 00:08:43.117 10032.049 - 10082.462: 18.3678% ( 14) 00:08:43.117 10082.462 - 10132.874: 18.4802% ( 10) 00:08:43.117 10132.874 - 10183.286: 18.5701% ( 
8) 00:08:43.117 10183.286 - 10233.698: 18.6376% ( 6) 00:08:43.117 10233.698 - 10284.111: 18.7275% ( 8) 00:08:43.117 10284.111 - 10334.523: 18.8174% ( 8) 00:08:43.117 10334.523 - 10384.935: 18.9074% ( 8) 00:08:43.117 10384.935 - 10435.348: 19.0198% ( 10) 00:08:43.117 10435.348 - 10485.760: 19.1434% ( 11) 00:08:43.117 10485.760 - 10536.172: 19.2558% ( 10) 00:08:43.117 10536.172 - 10586.585: 19.3570% ( 9) 00:08:43.117 10586.585 - 10636.997: 19.5594% ( 18) 00:08:43.117 10636.997 - 10687.409: 19.7617% ( 18) 00:08:43.117 10687.409 - 10737.822: 19.9978% ( 21) 00:08:43.117 10737.822 - 10788.234: 20.3350% ( 30) 00:08:43.117 10788.234 - 10838.646: 20.7172% ( 34) 00:08:43.117 10838.646 - 10889.058: 21.0881% ( 33) 00:08:43.117 10889.058 - 10939.471: 21.3916% ( 27) 00:08:43.117 10939.471 - 10989.883: 21.7513% ( 32) 00:08:43.117 10989.883 - 11040.295: 22.0773% ( 29) 00:08:43.117 11040.295 - 11090.708: 22.4371% ( 32) 00:08:43.117 11090.708 - 11141.120: 22.8305% ( 35) 00:08:43.117 11141.120 - 11191.532: 23.1115% ( 25) 00:08:43.117 11191.532 - 11241.945: 23.3925% ( 25) 00:08:43.117 11241.945 - 11292.357: 23.6960% ( 27) 00:08:43.117 11292.357 - 11342.769: 23.9883% ( 26) 00:08:43.117 11342.769 - 11393.182: 24.2918% ( 27) 00:08:43.117 11393.182 - 11443.594: 24.6066% ( 28) 00:08:43.117 11443.594 - 11494.006: 24.8201% ( 19) 00:08:43.117 11494.006 - 11544.418: 24.9775% ( 14) 00:08:43.117 11544.418 - 11594.831: 25.0787% ( 9) 00:08:43.117 11594.831 - 11645.243: 25.1911% ( 10) 00:08:43.117 11645.243 - 11695.655: 25.3147% ( 11) 00:08:43.117 11695.655 - 11746.068: 25.4159% ( 9) 00:08:43.117 11746.068 - 11796.480: 25.5396% ( 11) 00:08:43.117 11796.480 - 11846.892: 25.6183% ( 7) 00:08:43.117 11846.892 - 11897.305: 25.7307% ( 10) 00:08:43.117 11897.305 - 11947.717: 25.8543% ( 11) 00:08:43.117 11947.717 - 11998.129: 25.9330% ( 7) 00:08:43.117 11998.129 - 12048.542: 26.0004% ( 6) 00:08:43.117 12048.542 - 12098.954: 26.0567% ( 5) 00:08:43.117 12098.954 - 12149.366: 26.1241% ( 6) 00:08:43.118 12149.366 - 12199.778: 26.1803% ( 5) 00:08:43.118 12199.778 - 12250.191: 26.3040% ( 11) 00:08:43.118 12250.191 - 12300.603: 26.4388% ( 12) 00:08:43.118 12300.603 - 12351.015: 26.6862% ( 22) 00:08:43.118 12351.015 - 12401.428: 26.9672% ( 25) 00:08:43.118 12401.428 - 12451.840: 27.2819% ( 28) 00:08:43.118 12451.840 - 12502.252: 27.5742% ( 26) 00:08:43.118 12502.252 - 12552.665: 27.8440% ( 24) 00:08:43.118 12552.665 - 12603.077: 28.1250% ( 25) 00:08:43.118 12603.077 - 12653.489: 28.4622% ( 30) 00:08:43.118 12653.489 - 12703.902: 28.8219% ( 32) 00:08:43.118 12703.902 - 12754.314: 29.1929% ( 33) 00:08:43.118 12754.314 - 12804.726: 29.6425% ( 40) 00:08:43.118 12804.726 - 12855.138: 30.1371% ( 44) 00:08:43.118 12855.138 - 12905.551: 30.8004% ( 59) 00:08:43.118 12905.551 - 13006.375: 32.0031% ( 107) 00:08:43.118 13006.375 - 13107.200: 33.2509% ( 111) 00:08:43.118 13107.200 - 13208.025: 34.7460% ( 133) 00:08:43.118 13208.025 - 13308.849: 36.1398% ( 124) 00:08:43.118 13308.849 - 13409.674: 37.3876% ( 111) 00:08:43.118 13409.674 - 13510.498: 38.6016% ( 108) 00:08:43.118 13510.498 - 13611.323: 39.7594% ( 103) 00:08:43.118 13611.323 - 13712.148: 41.1084% ( 120) 00:08:43.118 13712.148 - 13812.972: 42.3449% ( 110) 00:08:43.118 13812.972 - 13913.797: 43.5139% ( 104) 00:08:43.118 13913.797 - 14014.622: 44.8741% ( 121) 00:08:43.118 14014.622 - 14115.446: 46.2343% ( 121) 00:08:43.118 14115.446 - 14216.271: 47.5832% ( 120) 00:08:43.118 14216.271 - 14317.095: 48.8647% ( 114) 00:08:43.118 14317.095 - 14417.920: 50.0674% ( 107) 00:08:43.118 14417.920 - 
[remaining per-bucket latency data for the preceding histogram omitted]
00:08:43.118 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:43.118 ==============================================================================
00:08:43.118        Range in us     Cumulative    IO count
00:08:43.118   7763.495 -  7813.908:   0.0337% (     3)
00:08:43.118   7813.908 -  7864.320:   0.0899% (     5)
00:08:43.118   7864.320 -  7914.732:   0.1911% (     9)
[remaining per-bucket latency data omitted]
00:08:43.120 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:43.120 ==============================================================================
00:08:43.120        Range in us     Cumulative    IO count
[per-bucket latency data omitted]
00:08:43.121 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:43.121 ==============================================================================
00:08:43.121        Range in us     Cumulative    IO count
[per-bucket latency data omitted]
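Each histogram body above uses the same three-column layout: a latency bucket (range in microseconds), the percentage of all I/Os that completed at or below the top of that bucket (the "Cumulative" column), and the count of I/Os that landed in that bucket. A minimal sketch of that relationship in plain Python (not SPDK code) follows; the three sample buckets are copied from the NSID 1 histogram above, but the real log divides by the run's total I/O count rather than the sample total, so the demo percentages are illustrative only.

    # Rebuild the "Cumulative" column of a spdk_nvme_perf -LL histogram
    # from per-bucket IO counts. Sample buckets copied from the log above;
    # total_ios here is the sample total, not the run's true total.
    buckets = [
        (7763.495, 7813.908, 3),   # (range_lo_us, range_hi_us, io_count)
        (7813.908, 7864.320, 5),
        (7864.320, 7914.732, 9),
    ]

    def cumulative_lines(buckets, total_ios):
        """Yield log-style lines: 'lo - hi: cumulative% ( count)'."""
        running = 0
        for lo, hi, count in buckets:
            running += count
            yield f"{lo:10.3f} - {hi:10.3f}: {100.0 * running / total_ios:8.4f}% ({count:6d})"

    for line in cumulative_lines(buckets, total_ios=sum(b[2] for b in buckets)):
        print(line)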
00:08:43.122  04:25:39 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:44.066 Initializing NVMe Controllers
00:08:44.066 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:44.066 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:44.066 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:44.066 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:44.066 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:44.066 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:44.066 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:44.066 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:44.066 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:44.066 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:44.066 Initialization complete. Launching workers.
00:08:44.066 ========================================================
00:08:44.066                                                                            Latency(us)
00:08:44.066 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:44.066 PCIE (0000:00:13.0) NSID 1 from core 0:    7380.25      86.49   17400.60    9757.08   39529.16
00:08:44.066 PCIE (0000:00:10.0) NSID 1 from core 0:    7380.25      86.49   17382.54    9572.29   38370.93
00:08:44.066 PCIE (0000:00:11.0) NSID 1 from core 0:    7380.25      86.49   17363.79    9523.46   36724.08
00:08:44.066 PCIE (0000:00:12.0) NSID 1 from core 0:    7380.25      86.49   17345.96    9863.82   36178.93
00:08:44.066 PCIE (0000:00:12.0) NSID 2 from core 0:    7380.25      86.49   17328.02    9969.74   34833.76
00:08:44.066 PCIE (0000:00:12.0) NSID 3 from core 0:    7443.87      87.23   17162.37    9411.19   27233.63
00:08:44.066 ========================================================
00:08:44.066 Total                                  :   44345.12     519.67   17330.30    9411.19   39529.16
00:08:44.066 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:44.066 =================================================================================
00:08:44.066   1.00000% : 10485.760us
00:08:44.066  10.00000% : 12401.428us
00:08:44.066  25.00000% : 17241.009us
00:08:44.066  50.00000% : 18047.606us
00:08:44.066  75.00000% : 18753.378us
00:08:44.066  90.00000% : 19459.151us
00:08:44.066  95.00000% : 20064.098us
00:08:44.066  98.00000% : 20870.695us
00:08:44.066  99.00000% : 31658.929us
00:08:44.066  99.50000% : 38716.652us
00:08:44.066  99.90000% : 39523.249us
00:08:44.066  99.99000% : 39724.898us
00:08:44.066  99.99900% : 39724.898us
00:08:44.066  99.99990% : 39724.898us
00:08:44.066  99.99999% : 39724.898us
00:08:44.066 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:44.066 =================================================================================
00:08:44.066   1.00000% : 10435.348us
00:08:44.066  10.00000% : 12401.428us
00:08:44.066  25.00000% : 16837.711us
00:08:44.066  50.00000% : 17845.957us
00:08:44.066  75.00000% : 19055.852us
00:08:44.066  90.00000% : 19862.449us
00:08:44.066  95.00000% : 20265.748us
00:08:44.066  98.00000% : 21576.468us
00:08:44.066  99.00000% : 30449.034us
00:08:44.066  99.50000% : 37506.757us
00:08:44.066  99.90000% : 38313.354us
00:08:44.066  99.99000% : 38515.003us
00:08:44.066  99.99900% : 38515.003us
00:08:44.066  99.99990% : 38515.003us
00:08:44.066  99.99999% : 38515.003us
00:08:44.066 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:44.066 =================================================================================
00:08:44.066   1.00000% : 10233.698us
00:08:44.066  10.00000% : 12451.840us
00:08:44.066  25.00000% : 17140.185us
00:08:44.066  50.00000% : 18047.606us
00:08:44.066  75.00000% : 18753.378us
00:08:44.066  90.00000% : 19559.975us
00:08:44.066  95.00000% : 20164.923us
00:08:44.066  98.00000% : 21374.818us
00:08:44.066  99.00000% : 28634.191us
00:08:44.066  99.50000% : 35893.563us
00:08:44.066  99.90000% : 36700.160us
00:08:44.066  99.99000% : 36901.809us
00:08:44.066  99.99900% : 36901.809us
00:08:44.066  99.99990% : 36901.809us
00:08:44.066  99.99999% : 36901.809us
00:08:44.066 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:44.066 =================================================================================
00:08:44.066   1.00000% : 10334.523us
00:08:44.066  10.00000% : 12401.428us
00:08:44.066  25.00000% : 17140.185us
00:08:44.066  50.00000% : 18148.431us
00:08:44.066  75.00000% : 18753.378us
00:08:44.066  90.00000% : 19459.151us
00:08:44.066  95.00000% : 20064.098us
00:08:44.066  98.00000% : 21072.345us
00:08:44.066  99.00000% : 28230.892us
00:08:44.066  99.50000% : 35288.615us
00:08:44.066  99.90000% : 36095.212us
00:08:44.066  99.99000% : 36296.862us
00:08:44.066  99.99900% : 36296.862us
00:08:44.066  99.99990% : 36296.862us
00:08:44.066  99.99999% : 36296.862us
00:08:44.066 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:44.066 =================================================================================
00:08:44.066   1.00000% : 10435.348us
00:08:44.066  10.00000% : 12401.428us
00:08:44.066  25.00000% : 17241.009us
00:08:44.066  50.00000% : 18047.606us
00:08:44.066  75.00000% : 18753.378us
00:08:44.066  90.00000% : 19459.151us
00:08:44.066  95.00000% : 20064.098us
00:08:44.331  98.00000% : 21072.345us
00:08:44.331  99.00000% : 27020.997us
00:08:44.331  99.50000% : 34078.720us
00:08:44.331  99.90000% : 34683.668us
00:08:44.331  99.99000% : 34885.317us
00:08:44.331  99.99900% : 34885.317us
00:08:44.331  99.99990% : 34885.317us
00:08:44.331  99.99999% : 34885.317us
00:08:44.331 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:44.331 =================================================================================
00:08:44.331   1.00000% : 10233.698us
00:08:44.331  10.00000% : 12401.428us
00:08:44.331  25.00000% : 17039.360us
00:08:44.331  50.00000% : 18047.606us
00:08:44.331  75.00000% : 18753.378us
00:08:44.331  90.00000% : 19358.326us
00:08:44.331  95.00000% : 19862.449us
00:08:44.331  98.00000% : 20568.222us
00:08:44.331  99.00000% : 21173.169us
00:08:44.331  99.50000% : 26416.049us
00:08:44.331  99.90000% : 27222.646us
00:08:44.331  99.99000% : 27424.295us
00:08:44.331  99.99900% : 27424.295us
00:08:44.331  99.99990% : 27424.295us
00:08:44.331  99.99999% : 27424.295us
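For reference, the run above came from spdk_nvme_perf with -q 128 (queue depth), -o 12288 (12 KiB I/O size), -w write, -t 1 (one-second run), and -i 0 (shared-memory ID); per SPDK's perf usage text, -L enables software latency tracking and doubling it (-LL) additionally prints the detailed per-bucket histograms that follow. The device table is internally consistent: 7380.25 IOPS at 12288 bytes per I/O is about 86.49 MiB/s. The sketch below is an illustrative parser, not part of the SPDK tree; the regexes simply match the "Summary latency data" line formats visible above, and console.log is a hypothetical file name.

    import re
    from collections import defaultdict

    # Pull per-device latency percentiles out of a spdk_nvme_perf -LL
    # console log. Leading "00:08:44.066"-style timestamps are ignored
    # because the regexes anchor on the payload, not the line start.
    HEADER = re.compile(r"Summary latency data for (.+?) from core \d+:")
    PERCENTILE = re.compile(r"(\d+\.\d+)%\s*:\s*([\d.]+)us")

    def parse_summaries(log_text):
        summaries = defaultdict(dict)
        device = None
        for line in log_text.splitlines():
            header = HEADER.search(line)
            if header:
                device = header.group(1)
                continue
            if device:
                pct = PERCENTILE.search(line)
                if pct:
                    summaries[device][float(pct.group(1))] = float(pct.group(2))
        return dict(summaries)

    # Example: report median and tail latency per namespace.
    if __name__ == "__main__":
        with open("console.log") as f:   # hypothetical file name
            for dev, pcts in parse_summaries(f.read()).items():
                print(f"{dev}: p50={pcts.get(50.0)}us  p99={pcts.get(99.0)}us")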
00:08:44.331 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:44.331 ==============================================================================
00:08:44.331        Range in us     Cumulative    IO count
[per-bucket latency data omitted]
00:08:44.331 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:44.332 ==============================================================================
00:08:44.332        Range in us     Cumulative    IO count
[per-bucket latency data omitted]
00:08:44.332 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:44.332 ==============================================================================
00:08:44.332        Range in us     Cumulative    IO count
[per-bucket latency data omitted]
00:08:44.333 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:44.333 ==============================================================================
00:08:44.333        Range in us     Cumulative    IO count
[per-bucket latency data omitted]
11.5571% ( 36) 00:08:44.334 12552.665 - 12603.077: 11.9073% ( 26) 00:08:44.334 12603.077 - 12653.489: 12.4865% ( 43) 00:08:44.334 12653.489 - 12703.902: 12.8502% ( 27) 00:08:44.334 12703.902 - 12754.314: 13.1466% ( 22) 00:08:44.334 12754.314 - 12804.726: 13.3621% ( 16) 00:08:44.334 12804.726 - 12855.138: 13.5776% ( 16) 00:08:44.334 12855.138 - 12905.551: 13.6853% ( 8) 00:08:44.334 12905.551 - 13006.375: 13.8335% ( 11) 00:08:44.334 13006.375 - 13107.200: 13.9143% ( 6) 00:08:44.334 13107.200 - 13208.025: 14.0086% ( 7) 00:08:44.334 13208.025 - 13308.849: 14.1972% ( 14) 00:08:44.334 13308.849 - 13409.674: 14.5474% ( 26) 00:08:44.334 13409.674 - 13510.498: 14.8438% ( 22) 00:08:44.334 13510.498 - 13611.323: 15.1266% ( 21) 00:08:44.334 13611.323 - 13712.148: 15.5307% ( 30) 00:08:44.334 13712.148 - 13812.972: 16.1099% ( 43) 00:08:44.334 13812.972 - 13913.797: 16.6622% ( 41) 00:08:44.334 13913.797 - 14014.622: 16.9989% ( 25) 00:08:44.334 14014.622 - 14115.446: 17.2683% ( 20) 00:08:44.334 14115.446 - 14216.271: 17.6185% ( 26) 00:08:44.334 14216.271 - 14317.095: 18.1439% ( 39) 00:08:44.334 14317.095 - 14417.920: 18.4402% ( 22) 00:08:44.334 14417.920 - 14518.745: 18.7096% ( 20) 00:08:44.334 14518.745 - 14619.569: 18.9925% ( 21) 00:08:44.334 14619.569 - 14720.394: 19.2619% ( 20) 00:08:44.334 14720.394 - 14821.218: 19.5312% ( 20) 00:08:44.334 14821.218 - 14922.043: 19.9219% ( 29) 00:08:44.334 14922.043 - 15022.868: 20.2047% ( 21) 00:08:44.334 15022.868 - 15123.692: 20.5011% ( 22) 00:08:44.334 15123.692 - 15224.517: 20.8917% ( 29) 00:08:44.334 15224.517 - 15325.342: 21.2419% ( 26) 00:08:44.334 15325.342 - 15426.166: 21.4978% ( 19) 00:08:44.334 15426.166 - 15526.991: 21.5921% ( 7) 00:08:44.334 15526.991 - 15627.815: 21.6864% ( 7) 00:08:44.334 15627.815 - 15728.640: 21.7942% ( 8) 00:08:44.334 15728.640 - 15829.465: 22.1309% ( 25) 00:08:44.334 15829.465 - 15930.289: 22.2117% ( 6) 00:08:44.334 15930.289 - 16031.114: 22.2522% ( 3) 00:08:44.334 16031.114 - 16131.938: 22.2926% ( 3) 00:08:44.334 16131.938 - 16232.763: 22.3195% ( 2) 00:08:44.334 16232.763 - 16333.588: 22.3464% ( 2) 00:08:44.334 16333.588 - 16434.412: 22.3869% ( 3) 00:08:44.334 16434.412 - 16535.237: 22.4273% ( 3) 00:08:44.334 16535.237 - 16636.062: 22.5889% ( 12) 00:08:44.334 16636.062 - 16736.886: 22.8179% ( 17) 00:08:44.334 16736.886 - 16837.711: 23.1412% ( 24) 00:08:44.334 16837.711 - 16938.535: 23.5453% ( 30) 00:08:44.334 16938.535 - 17039.360: 24.2726% ( 54) 00:08:44.334 17039.360 - 17140.185: 25.3502% ( 80) 00:08:44.334 17140.185 - 17241.009: 26.7107% ( 101) 00:08:44.334 17241.009 - 17341.834: 28.3944% ( 125) 00:08:44.334 17341.834 - 17442.658: 30.2802% ( 140) 00:08:44.334 17442.658 - 17543.483: 32.6643% ( 177) 00:08:44.334 17543.483 - 17644.308: 35.0620% ( 178) 00:08:44.334 17644.308 - 17745.132: 37.8906% ( 210) 00:08:44.334 17745.132 - 17845.957: 42.1606% ( 317) 00:08:44.334 17845.957 - 17946.782: 45.9052% ( 278) 00:08:44.334 17946.782 - 18047.606: 49.9865% ( 303) 00:08:44.334 18047.606 - 18148.431: 54.4585% ( 332) 00:08:44.334 18148.431 - 18249.255: 57.8394% ( 251) 00:08:44.334 18249.255 - 18350.080: 61.8534% ( 298) 00:08:44.334 18350.080 - 18450.905: 65.5442% ( 274) 00:08:44.334 18450.905 - 18551.729: 69.8141% ( 317) 00:08:44.334 18551.729 - 18652.554: 73.8012% ( 296) 00:08:44.334 18652.554 - 18753.378: 77.2225% ( 254) 00:08:44.334 18753.378 - 18854.203: 79.7144% ( 185) 00:08:44.334 18854.203 - 18955.028: 82.0043% ( 170) 00:08:44.334 18955.028 - 19055.852: 84.2268% ( 165) 00:08:44.334 19055.852 - 19156.677: 86.1126% ( 140) 00:08:44.334 
19156.677 - 19257.502: 87.9580% ( 137) 00:08:44.334 19257.502 - 19358.326: 89.4666% ( 112) 00:08:44.334 19358.326 - 19459.151: 90.7328% ( 94) 00:08:44.334 19459.151 - 19559.975: 91.8373% ( 82) 00:08:44.334 19559.975 - 19660.800: 92.6185% ( 58) 00:08:44.334 19660.800 - 19761.625: 93.3594% ( 55) 00:08:44.334 19761.625 - 19862.449: 94.1945% ( 62) 00:08:44.334 19862.449 - 19963.274: 94.7198% ( 39) 00:08:44.334 19963.274 - 20064.098: 95.1239% ( 30) 00:08:44.334 20064.098 - 20164.923: 95.4876% ( 27) 00:08:44.334 20164.923 - 20265.748: 95.8648% ( 28) 00:08:44.334 20265.748 - 20366.572: 96.2015% ( 25) 00:08:44.334 20366.572 - 20467.397: 96.8885% ( 51) 00:08:44.334 20467.397 - 20568.222: 97.2252% ( 25) 00:08:44.334 20568.222 - 20669.046: 97.4811% ( 19) 00:08:44.334 20669.046 - 20769.871: 97.6562% ( 13) 00:08:44.334 20769.871 - 20870.695: 97.8044% ( 11) 00:08:44.334 20870.695 - 20971.520: 97.9391% ( 10) 00:08:44.334 20971.520 - 21072.345: 98.0469% ( 8) 00:08:44.334 21072.345 - 21173.169: 98.1412% ( 7) 00:08:44.334 21173.169 - 21273.994: 98.1816% ( 3) 00:08:44.334 21273.994 - 21374.818: 98.2085% ( 2) 00:08:44.334 21374.818 - 21475.643: 98.2355% ( 2) 00:08:44.334 21475.643 - 21576.468: 98.2624% ( 2) 00:08:44.334 21576.468 - 21677.292: 98.2759% ( 1) 00:08:44.334 26617.698 - 26819.348: 98.3163% ( 3) 00:08:44.334 26819.348 - 27020.997: 98.4240% ( 8) 00:08:44.334 27020.997 - 27222.646: 98.5318% ( 8) 00:08:44.334 27222.646 - 27424.295: 98.6395% ( 8) 00:08:44.334 27424.295 - 27625.945: 98.7473% ( 8) 00:08:44.334 27625.945 - 27827.594: 98.8551% ( 8) 00:08:44.334 27827.594 - 28029.243: 98.9763% ( 9) 00:08:44.334 28029.243 - 28230.892: 99.0841% ( 8) 00:08:44.334 28230.892 - 28432.542: 99.1379% ( 4) 00:08:44.334 34482.018 - 34683.668: 99.1783% ( 3) 00:08:44.334 34683.668 - 34885.317: 99.2861% ( 8) 00:08:44.334 34885.317 - 35086.966: 99.3939% ( 8) 00:08:44.334 35086.966 - 35288.615: 99.5016% ( 8) 00:08:44.334 35288.615 - 35490.265: 99.6094% ( 8) 00:08:44.334 35490.265 - 35691.914: 99.7306% ( 9) 00:08:44.334 35691.914 - 35893.563: 99.8384% ( 8) 00:08:44.334 35893.563 - 36095.212: 99.9461% ( 8) 00:08:44.334 36095.212 - 36296.862: 100.0000% ( 4) 00:08:44.334 00:08:44.334 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:44.334 ============================================================================== 00:08:44.334 Range in us Cumulative IO count 00:08:44.334 9931.225 - 9981.637: 0.0135% ( 1) 00:08:44.334 10032.049 - 10082.462: 0.0269% ( 1) 00:08:44.334 10082.462 - 10132.874: 0.0673% ( 3) 00:08:44.334 10132.874 - 10183.286: 0.1212% ( 4) 00:08:44.334 10183.286 - 10233.698: 0.2425% ( 9) 00:08:44.334 10233.698 - 10284.111: 0.3502% ( 8) 00:08:44.334 10284.111 - 10334.523: 0.5657% ( 16) 00:08:44.334 10334.523 - 10384.935: 0.7947% ( 17) 00:08:44.334 10384.935 - 10435.348: 1.0641% ( 20) 00:08:44.334 10435.348 - 10485.760: 1.3874% ( 24) 00:08:44.334 10485.760 - 10536.172: 1.6433% ( 19) 00:08:44.334 10536.172 - 10586.585: 1.9262% ( 21) 00:08:44.334 10586.585 - 10636.997: 2.1552% ( 17) 00:08:44.334 10636.997 - 10687.409: 2.3842% ( 17) 00:08:44.334 10687.409 - 10737.822: 2.8017% ( 31) 00:08:44.334 10737.822 - 10788.234: 3.1115% ( 23) 00:08:44.334 10788.234 - 10838.646: 3.3675% ( 19) 00:08:44.334 10838.646 - 10889.058: 3.6369% ( 20) 00:08:44.334 10889.058 - 10939.471: 3.8389% ( 15) 00:08:44.334 10939.471 - 10989.883: 3.9601% ( 9) 00:08:44.334 10989.883 - 11040.295: 4.0140% ( 4) 00:08:44.334 11040.295 - 11090.708: 4.0948% ( 6) 00:08:44.334 11090.708 - 11141.120: 4.1756% ( 6) 00:08:44.334 11141.120 - 
11191.532: 4.2430% ( 5) 00:08:44.334 11191.532 - 11241.945: 4.3103% ( 5) 00:08:44.334 11241.945 - 11292.357: 4.3508% ( 3) 00:08:44.334 11292.357 - 11342.769: 4.4450% ( 7) 00:08:44.334 11342.769 - 11393.182: 4.5393% ( 7) 00:08:44.334 11393.182 - 11443.594: 4.6202% ( 6) 00:08:44.334 11443.594 - 11494.006: 4.7279% ( 8) 00:08:44.335 11494.006 - 11544.418: 5.2802% ( 41) 00:08:44.335 11544.418 - 11594.831: 5.4688% ( 14) 00:08:44.335 11594.831 - 11645.243: 5.6843% ( 16) 00:08:44.335 11645.243 - 11695.655: 5.8998% ( 16) 00:08:44.335 11695.655 - 11746.068: 6.2769% ( 28) 00:08:44.335 11746.068 - 11796.480: 6.5329% ( 19) 00:08:44.335 11796.480 - 11846.892: 6.7753% ( 18) 00:08:44.335 11846.892 - 11897.305: 6.9639% ( 14) 00:08:44.335 11897.305 - 11947.717: 7.2064% ( 18) 00:08:44.335 11947.717 - 11998.129: 7.3276% ( 9) 00:08:44.335 11998.129 - 12048.542: 7.4892% ( 12) 00:08:44.335 12048.542 - 12098.954: 7.8394% ( 26) 00:08:44.335 12098.954 - 12149.366: 8.1897% ( 26) 00:08:44.335 12149.366 - 12199.778: 8.4321% ( 18) 00:08:44.335 12199.778 - 12250.191: 8.9574% ( 39) 00:08:44.335 12250.191 - 12300.603: 9.4154% ( 34) 00:08:44.335 12300.603 - 12351.015: 9.8060% ( 29) 00:08:44.335 12351.015 - 12401.428: 10.1697% ( 27) 00:08:44.335 12401.428 - 12451.840: 10.6412% ( 35) 00:08:44.335 12451.840 - 12502.252: 11.0722% ( 32) 00:08:44.335 12502.252 - 12552.665: 11.3955% ( 24) 00:08:44.335 12552.665 - 12603.077: 11.5975% ( 15) 00:08:44.335 12603.077 - 12653.489: 11.8939% ( 22) 00:08:44.335 12653.489 - 12703.902: 12.2441% ( 26) 00:08:44.335 12703.902 - 12754.314: 12.5000% ( 19) 00:08:44.335 12754.314 - 12804.726: 12.7694% ( 20) 00:08:44.335 12804.726 - 12855.138: 13.0927% ( 24) 00:08:44.335 12855.138 - 12905.551: 13.2812% ( 14) 00:08:44.335 12905.551 - 13006.375: 13.6315% ( 26) 00:08:44.335 13006.375 - 13107.200: 14.0221% ( 29) 00:08:44.335 13107.200 - 13208.025: 14.3319% ( 23) 00:08:44.335 13208.025 - 13308.849: 14.6148% ( 21) 00:08:44.335 13308.849 - 13409.674: 14.8033% ( 14) 00:08:44.335 13409.674 - 13510.498: 15.0458% ( 18) 00:08:44.335 13510.498 - 13611.323: 15.3960% ( 26) 00:08:44.335 13611.323 - 13712.148: 15.8001% ( 30) 00:08:44.335 13712.148 - 13812.972: 16.0291% ( 17) 00:08:44.335 13812.972 - 13913.797: 16.3389% ( 23) 00:08:44.335 13913.797 - 14014.622: 16.8642% ( 39) 00:08:44.335 14014.622 - 14115.446: 17.4434% ( 43) 00:08:44.335 14115.446 - 14216.271: 18.1304% ( 51) 00:08:44.335 14216.271 - 14317.095: 18.6018% ( 35) 00:08:44.335 14317.095 - 14417.920: 18.9790% ( 28) 00:08:44.335 14417.920 - 14518.745: 19.1810% ( 15) 00:08:44.335 14518.745 - 14619.569: 19.5043% ( 24) 00:08:44.335 14619.569 - 14720.394: 19.7198% ( 16) 00:08:44.335 14720.394 - 14821.218: 19.9219% ( 15) 00:08:44.335 14821.218 - 14922.043: 20.4068% ( 36) 00:08:44.335 14922.043 - 15022.868: 20.5819% ( 13) 00:08:44.335 15022.868 - 15123.692: 20.7974% ( 16) 00:08:44.335 15123.692 - 15224.517: 20.9591% ( 12) 00:08:44.335 15224.517 - 15325.342: 21.0668% ( 8) 00:08:44.335 15325.342 - 15426.166: 21.1611% ( 7) 00:08:44.335 15426.166 - 15526.991: 21.2419% ( 6) 00:08:44.335 15526.991 - 15627.815: 21.5248% ( 21) 00:08:44.335 15627.815 - 15728.640: 21.6325% ( 8) 00:08:44.335 15728.640 - 15829.465: 21.6864% ( 4) 00:08:44.335 15829.465 - 15930.289: 21.7403% ( 4) 00:08:44.335 15930.289 - 16031.114: 21.8885% ( 11) 00:08:44.335 16031.114 - 16131.938: 21.9962% ( 8) 00:08:44.335 16131.938 - 16232.763: 22.0905% ( 7) 00:08:44.335 16232.763 - 16333.588: 22.1579% ( 5) 00:08:44.335 16333.588 - 16434.412: 22.2117% ( 4) 00:08:44.335 16434.412 - 16535.237: 22.3464% ( 10) 
00:08:44.335 16535.237 - 16636.062: 22.5350% ( 14) 00:08:44.335 16636.062 - 16736.886: 22.8179% ( 21) 00:08:44.335 16736.886 - 16837.711: 23.1008% ( 21) 00:08:44.335 16837.711 - 16938.535: 23.3567% ( 19) 00:08:44.335 16938.535 - 17039.360: 23.9224% ( 42) 00:08:44.335 17039.360 - 17140.185: 24.9461% ( 76) 00:08:44.335 17140.185 - 17241.009: 26.1584% ( 90) 00:08:44.335 17241.009 - 17341.834: 27.8691% ( 127) 00:08:44.335 17341.834 - 17442.658: 29.8222% ( 145) 00:08:44.335 17442.658 - 17543.483: 32.1794% ( 175) 00:08:44.335 17543.483 - 17644.308: 34.5905% ( 179) 00:08:44.335 17644.308 - 17745.132: 38.1061% ( 261) 00:08:44.335 17745.132 - 17845.957: 42.0259% ( 291) 00:08:44.335 17845.957 - 17946.782: 46.3631% ( 322) 00:08:44.335 17946.782 - 18047.606: 50.8217% ( 331) 00:08:44.335 18047.606 - 18148.431: 56.0480% ( 388) 00:08:44.335 18148.431 - 18249.255: 59.7252% ( 273) 00:08:44.335 18249.255 - 18350.080: 62.9714% ( 241) 00:08:44.335 18350.080 - 18450.905: 67.0393% ( 302) 00:08:44.335 18450.905 - 18551.729: 70.4203% ( 251) 00:08:44.335 18551.729 - 18652.554: 74.1783% ( 279) 00:08:44.335 18652.554 - 18753.378: 77.1552% ( 221) 00:08:44.335 18753.378 - 18854.203: 79.8357% ( 199) 00:08:44.335 18854.203 - 18955.028: 82.0986% ( 168) 00:08:44.335 18955.028 - 19055.852: 84.1460% ( 152) 00:08:44.335 19055.852 - 19156.677: 85.8432% ( 126) 00:08:44.335 19156.677 - 19257.502: 87.6212% ( 132) 00:08:44.335 19257.502 - 19358.326: 89.0625% ( 107) 00:08:44.335 19358.326 - 19459.151: 90.3017% ( 92) 00:08:44.335 19459.151 - 19559.975: 91.5814% ( 95) 00:08:44.335 19559.975 - 19660.800: 92.7128% ( 84) 00:08:44.335 19660.800 - 19761.625: 93.4941% ( 58) 00:08:44.335 19761.625 - 19862.449: 94.1810% ( 51) 00:08:44.335 19862.449 - 19963.274: 94.7737% ( 44) 00:08:44.335 19963.274 - 20064.098: 95.1778% ( 30) 00:08:44.335 20064.098 - 20164.923: 95.5280% ( 26) 00:08:44.335 20164.923 - 20265.748: 95.9052% ( 28) 00:08:44.335 20265.748 - 20366.572: 96.1746% ( 20) 00:08:44.335 20366.572 - 20467.397: 96.4978% ( 24) 00:08:44.335 20467.397 - 20568.222: 97.0636% ( 42) 00:08:44.335 20568.222 - 20669.046: 97.3060% ( 18) 00:08:44.335 20669.046 - 20769.871: 97.6024% ( 22) 00:08:44.335 20769.871 - 20870.695: 97.7505% ( 11) 00:08:44.335 20870.695 - 20971.520: 97.8987% ( 11) 00:08:44.335 20971.520 - 21072.345: 98.0199% ( 9) 00:08:44.335 21072.345 - 21173.169: 98.1277% ( 8) 00:08:44.335 21173.169 - 21273.994: 98.1950% ( 5) 00:08:44.335 21273.994 - 21374.818: 98.2355% ( 3) 00:08:44.335 21374.818 - 21475.643: 98.2624% ( 2) 00:08:44.335 21475.643 - 21576.468: 98.2759% ( 1) 00:08:44.335 25609.452 - 25710.277: 98.3163% ( 3) 00:08:44.335 25710.277 - 25811.102: 98.3702% ( 4) 00:08:44.335 25811.102 - 26012.751: 98.4779% ( 8) 00:08:44.335 26012.751 - 26214.400: 98.5857% ( 8) 00:08:44.335 26214.400 - 26416.049: 98.6934% ( 8) 00:08:44.335 26416.049 - 26617.698: 98.8147% ( 9) 00:08:44.335 26617.698 - 26819.348: 98.9224% ( 8) 00:08:44.335 26819.348 - 27020.997: 99.0302% ( 8) 00:08:44.335 27020.997 - 27222.646: 99.1379% ( 8) 00:08:44.335 33070.474 - 33272.123: 99.1649% ( 2) 00:08:44.335 33272.123 - 33473.772: 99.2592% ( 7) 00:08:44.335 33473.772 - 33675.422: 99.3669% ( 8) 00:08:44.335 33675.422 - 33877.071: 99.4747% ( 8) 00:08:44.335 33877.071 - 34078.720: 99.5824% ( 8) 00:08:44.335 34078.720 - 34280.369: 99.6902% ( 8) 00:08:44.335 34280.369 - 34482.018: 99.7980% ( 8) 00:08:44.335 34482.018 - 34683.668: 99.9192% ( 9) 00:08:44.335 34683.668 - 34885.317: 100.0000% ( 6) 00:08:44.335 00:08:44.335 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 
00:08:44.335 ============================================================================== 00:08:44.335 Range in us Cumulative IO count 00:08:44.335 9376.689 - 9427.102: 0.0134% ( 1) 00:08:44.335 9578.338 - 9628.751: 0.0534% ( 3) 00:08:44.335 9628.751 - 9679.163: 0.1202% ( 5) 00:08:44.335 9679.163 - 9729.575: 0.2137% ( 7) 00:08:44.335 9729.575 - 9779.988: 0.3339% ( 9) 00:08:44.335 9779.988 - 9830.400: 0.4808% ( 11) 00:08:44.335 9830.400 - 9880.812: 0.5609% ( 6) 00:08:44.335 9880.812 - 9931.225: 0.6410% ( 6) 00:08:44.335 9931.225 - 9981.637: 0.6944% ( 4) 00:08:44.335 9981.637 - 10032.049: 0.7345% ( 3) 00:08:44.335 10032.049 - 10082.462: 0.7612% ( 2) 00:08:44.335 10082.462 - 10132.874: 0.8681% ( 8) 00:08:44.335 10132.874 - 10183.286: 0.9615% ( 7) 00:08:44.335 10183.286 - 10233.698: 1.0550% ( 7) 00:08:44.335 10233.698 - 10284.111: 1.1752% ( 9) 00:08:44.335 10284.111 - 10334.523: 1.3622% ( 14) 00:08:44.335 10334.523 - 10384.935: 1.4690% ( 8) 00:08:44.335 10384.935 - 10435.348: 1.5892% ( 9) 00:08:44.335 10435.348 - 10485.760: 1.7228% ( 10) 00:08:44.336 10485.760 - 10536.172: 1.8429% ( 9) 00:08:44.336 10536.172 - 10586.585: 1.9899% ( 11) 00:08:44.336 10586.585 - 10636.997: 2.2169% ( 17) 00:08:44.336 10636.997 - 10687.409: 2.3371% ( 9) 00:08:44.336 10687.409 - 10737.822: 2.4973% ( 12) 00:08:44.336 10737.822 - 10788.234: 2.7244% ( 17) 00:08:44.336 10788.234 - 10838.646: 3.0449% ( 24) 00:08:44.336 10838.646 - 10889.058: 3.4188% ( 28) 00:08:44.336 10889.058 - 10939.471: 3.7260% ( 23) 00:08:44.336 10939.471 - 10989.883: 3.9797% ( 19) 00:08:44.336 10989.883 - 11040.295: 4.2601% ( 21) 00:08:44.336 11040.295 - 11090.708: 4.5272% ( 20) 00:08:44.336 11090.708 - 11141.120: 4.6741% ( 11) 00:08:44.336 11141.120 - 11191.532: 4.7943% ( 9) 00:08:44.336 11191.532 - 11241.945: 4.8611% ( 5) 00:08:44.336 11241.945 - 11292.357: 4.9279% ( 5) 00:08:44.336 11292.357 - 11342.769: 5.0347% ( 8) 00:08:44.336 11342.769 - 11393.182: 5.1816% ( 11) 00:08:44.336 11393.182 - 11443.594: 5.3018% ( 9) 00:08:44.336 11443.594 - 11494.006: 5.4354% ( 10) 00:08:44.336 11494.006 - 11544.418: 5.5556% ( 9) 00:08:44.336 11544.418 - 11594.831: 5.6891% ( 10) 00:08:44.336 11594.831 - 11645.243: 5.9161% ( 17) 00:08:44.336 11645.243 - 11695.655: 6.1565% ( 18) 00:08:44.336 11695.655 - 11746.068: 6.3301% ( 13) 00:08:44.336 11746.068 - 11796.480: 6.5037% ( 13) 00:08:44.336 11796.480 - 11846.892: 6.6907% ( 14) 00:08:44.336 11846.892 - 11897.305: 6.9444% ( 19) 00:08:44.336 11897.305 - 11947.717: 7.1448% ( 15) 00:08:44.336 11947.717 - 11998.129: 7.2917% ( 11) 00:08:44.336 11998.129 - 12048.542: 7.4252% ( 10) 00:08:44.336 12048.542 - 12098.954: 7.7057% ( 21) 00:08:44.336 12098.954 - 12149.366: 8.0796% ( 28) 00:08:44.336 12149.366 - 12199.778: 8.4535% ( 28) 00:08:44.336 12199.778 - 12250.191: 8.9877% ( 40) 00:08:44.336 12250.191 - 12300.603: 9.5353% ( 41) 00:08:44.336 12300.603 - 12351.015: 9.8691% ( 25) 00:08:44.336 12351.015 - 12401.428: 10.0694% ( 15) 00:08:44.336 12401.428 - 12451.840: 10.2831% ( 16) 00:08:44.336 12451.840 - 12502.252: 10.5369% ( 19) 00:08:44.336 12502.252 - 12552.665: 10.7639% ( 17) 00:08:44.336 12552.665 - 12603.077: 11.0443% ( 21) 00:08:44.336 12603.077 - 12653.489: 11.2981% ( 19) 00:08:44.336 12653.489 - 12703.902: 11.5251% ( 17) 00:08:44.336 12703.902 - 12754.314: 11.7254% ( 15) 00:08:44.336 12754.314 - 12804.726: 12.0059% ( 21) 00:08:44.336 12804.726 - 12855.138: 12.3264% ( 24) 00:08:44.336 12855.138 - 12905.551: 12.6469% ( 24) 00:08:44.336 12905.551 - 13006.375: 13.2212% ( 43) 00:08:44.336 13006.375 - 13107.200: 13.8622% ( 
48) 00:08:44.336 13107.200 - 13208.025: 14.3964% ( 40) 00:08:44.336 13208.025 - 13308.849: 15.0107% ( 46) 00:08:44.336 13308.849 - 13409.674: 15.9989% ( 74) 00:08:44.336 13409.674 - 13510.498: 16.5999% ( 45) 00:08:44.336 13510.498 - 13611.323: 16.8670% ( 20) 00:08:44.336 13611.323 - 13712.148: 17.1207% ( 19) 00:08:44.336 13712.148 - 13812.972: 17.4546% ( 25) 00:08:44.336 13812.972 - 13913.797: 17.7217% ( 20) 00:08:44.336 13913.797 - 14014.622: 17.8819% ( 12) 00:08:44.336 14014.622 - 14115.446: 17.9354% ( 4) 00:08:44.336 14115.446 - 14216.271: 17.9487% ( 1) 00:08:44.336 14317.095 - 14417.920: 18.0956% ( 11) 00:08:44.336 14417.920 - 14518.745: 18.3226% ( 17) 00:08:44.336 14518.745 - 14619.569: 18.7366% ( 31) 00:08:44.336 14619.569 - 14720.394: 19.4444% ( 53) 00:08:44.336 14720.394 - 14821.218: 20.1522% ( 53) 00:08:44.336 14821.218 - 14922.043: 20.7131% ( 42) 00:08:44.336 14922.043 - 15022.868: 21.1004% ( 29) 00:08:44.336 15022.868 - 15123.692: 21.4343% ( 25) 00:08:44.336 15123.692 - 15224.517: 21.5678% ( 10) 00:08:44.336 15224.517 - 15325.342: 21.6213% ( 4) 00:08:44.336 15325.342 - 15426.166: 21.7949% ( 13) 00:08:44.336 15426.166 - 15526.991: 21.9551% ( 12) 00:08:44.336 15526.991 - 15627.815: 22.0085% ( 4) 00:08:44.336 15627.815 - 15728.640: 22.0353% ( 2) 00:08:44.336 15728.640 - 15829.465: 22.0753% ( 3) 00:08:44.336 15829.465 - 15930.289: 22.0887% ( 1) 00:08:44.336 15930.289 - 16031.114: 22.1421% ( 4) 00:08:44.336 16031.114 - 16131.938: 22.2222% ( 6) 00:08:44.336 16131.938 - 16232.763: 22.3825% ( 12) 00:08:44.336 16232.763 - 16333.588: 22.5027% ( 9) 00:08:44.336 16333.588 - 16434.412: 22.6095% ( 8) 00:08:44.336 16434.412 - 16535.237: 22.7698% ( 12) 00:08:44.336 16535.237 - 16636.062: 22.9167% ( 11) 00:08:44.336 16636.062 - 16736.886: 23.2639% ( 26) 00:08:44.336 16736.886 - 16837.711: 23.7179% ( 34) 00:08:44.336 16837.711 - 16938.535: 24.4792% ( 57) 00:08:44.336 16938.535 - 17039.360: 25.2404% ( 57) 00:08:44.336 17039.360 - 17140.185: 26.0150% ( 58) 00:08:44.336 17140.185 - 17241.009: 27.1501% ( 85) 00:08:44.336 17241.009 - 17341.834: 28.6725% ( 114) 00:08:44.336 17341.834 - 17442.658: 30.6223% ( 146) 00:08:44.336 17442.658 - 17543.483: 32.9193% ( 172) 00:08:44.336 17543.483 - 17644.308: 35.4567% ( 190) 00:08:44.336 17644.308 - 17745.132: 38.6752% ( 241) 00:08:44.336 17745.132 - 17845.957: 42.6015% ( 294) 00:08:44.336 17845.957 - 17946.782: 46.5545% ( 296) 00:08:44.336 17946.782 - 18047.606: 51.7762% ( 391) 00:08:44.336 18047.606 - 18148.431: 55.8494% ( 305) 00:08:44.336 18148.431 - 18249.255: 59.4551% ( 270) 00:08:44.336 18249.255 - 18350.080: 62.7404% ( 246) 00:08:44.336 18350.080 - 18450.905: 67.0940% ( 326) 00:08:44.336 18450.905 - 18551.729: 71.2206% ( 309) 00:08:44.336 18551.729 - 18652.554: 74.5726% ( 251) 00:08:44.336 18652.554 - 18753.378: 77.9647% ( 254) 00:08:44.336 18753.378 - 18854.203: 80.5422% ( 193) 00:08:44.336 18854.203 - 18955.028: 83.1731% ( 197) 00:08:44.336 18955.028 - 19055.852: 85.3632% ( 164) 00:08:44.336 19055.852 - 19156.677: 87.0726% ( 128) 00:08:44.336 19156.677 - 19257.502: 88.6752% ( 120) 00:08:44.336 19257.502 - 19358.326: 90.0374% ( 102) 00:08:44.336 19358.326 - 19459.151: 91.4931% ( 109) 00:08:44.336 19459.151 - 19559.975: 92.6149% ( 84) 00:08:44.336 19559.975 - 19660.800: 93.4963% ( 66) 00:08:44.336 19660.800 - 19761.625: 94.2975% ( 60) 00:08:44.336 19761.625 - 19862.449: 95.1522% ( 64) 00:08:44.336 19862.449 - 19963.274: 95.6998% ( 41) 00:08:44.336 19963.274 - 20064.098: 96.0604% ( 27) 00:08:44.336 20064.098 - 20164.923: 96.3809% ( 24) 00:08:44.336 
20164.923 - 20265.748: 96.6613% ( 21) 00:08:44.336 20265.748 - 20366.572: 97.0620% ( 30) 00:08:44.336 20366.572 - 20467.397: 97.7698% ( 53) 00:08:44.336 20467.397 - 20568.222: 98.1437% ( 28) 00:08:44.336 20568.222 - 20669.046: 98.3974% ( 19) 00:08:44.336 20669.046 - 20769.871: 98.5710% ( 13) 00:08:44.336 20769.871 - 20870.695: 98.6912% ( 9) 00:08:44.336 20870.695 - 20971.520: 98.8248% ( 10) 00:08:44.336 20971.520 - 21072.345: 98.9316% ( 8) 00:08:44.336 21072.345 - 21173.169: 99.0118% ( 6) 00:08:44.336 21173.169 - 21273.994: 99.0652% ( 4) 00:08:44.336 21273.994 - 21374.818: 99.0919% ( 2) 00:08:44.336 21374.818 - 21475.643: 99.1186% ( 2) 00:08:44.336 21475.643 - 21576.468: 99.1453% ( 2) 00:08:44.336 25609.452 - 25710.277: 99.1854% ( 3) 00:08:44.336 25710.277 - 25811.102: 99.2254% ( 3) 00:08:44.336 25811.102 - 26012.751: 99.3456% ( 9) 00:08:44.336 26012.751 - 26214.400: 99.4525% ( 8) 00:08:44.336 26214.400 - 26416.049: 99.5593% ( 8) 00:08:44.336 26416.049 - 26617.698: 99.6661% ( 8) 00:08:44.336 26617.698 - 26819.348: 99.7730% ( 8) 00:08:44.336 26819.348 - 27020.997: 99.8798% ( 8) 00:08:44.336 27020.997 - 27222.646: 99.9866% ( 8) 00:08:44.336 27222.646 - 27424.295: 100.0000% ( 1) 00:08:44.336 00:08:44.336 04:25:40 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:44.336 00:08:44.336 real 0m2.596s 00:08:44.336 user 0m2.279s 00:08:44.336 sys 0m0.199s 00:08:44.336 04:25:40 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.336 04:25:40 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:44.336 ************************************ 00:08:44.336 END TEST nvme_perf 00:08:44.336 ************************************ 00:08:44.336 04:25:40 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:44.336 04:25:40 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:44.336 04:25:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.336 04:25:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.336 ************************************ 00:08:44.336 START TEST nvme_hello_world 00:08:44.336 ************************************ 00:08:44.336 04:25:40 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:44.598 Initializing NVMe Controllers 00:08:44.598 Attached to 0000:00:13.0 00:08:44.598 Namespace ID: 1 size: 1GB 00:08:44.598 Attached to 0000:00:10.0 00:08:44.598 Namespace ID: 1 size: 6GB 00:08:44.598 Attached to 0000:00:11.0 00:08:44.598 Namespace ID: 1 size: 5GB 00:08:44.598 Attached to 0000:00:12.0 00:08:44.598 Namespace ID: 1 size: 4GB 00:08:44.598 Namespace ID: 2 size: 4GB 00:08:44.598 Namespace ID: 3 size: 4GB 00:08:44.598 Initialization complete. 00:08:44.598 INFO: using host memory buffer for IO 00:08:44.598 Hello world! 00:08:44.598 INFO: using host memory buffer for IO 00:08:44.598 Hello world! 00:08:44.598 INFO: using host memory buffer for IO 00:08:44.598 Hello world! 00:08:44.598 INFO: using host memory buffer for IO 00:08:44.598 Hello world! 00:08:44.598 INFO: using host memory buffer for IO 00:08:44.598 Hello world! 00:08:44.598 INFO: using host memory buffer for IO 00:08:44.598 Hello world! 
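For readers tracing what hello_world exercised above: it probes each local PCIe controller, attaches, lists the active namespaces, then writes a buffer and reads it back through a host memory buffer. A minimal sketch of the probe/attach half against the public SPDK NVMe API follows; the IO round trip, error paths, and cleanup are trimmed, the app name is invented, and this is not the example's exact source.

#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true;	/* attach to every controller the probe finds */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	uint32_t nsid;

	printf("Attached to %s\n", trid->traddr);
	/* walk the active namespaces, as the size lines above do */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("Namespace ID: %u size: %" PRIu64 "GB\n", nsid,
		       spdk_nvme_ns_get_size(ns) / 1000000000);
	}
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "hello_world_sketch";	/* hypothetical name, not the example's */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* NULL transport ID: enumerate all local PCIe NVMe controllers */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}
	return 0;
}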
00:08:44.598 00:08:44.598 real 0m0.268s 00:08:44.598 user 0m0.127s 00:08:44.598 sys 0m0.097s 00:08:44.598 04:25:41 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.598 04:25:41 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:44.598 ************************************ 00:08:44.598 END TEST nvme_hello_world 00:08:44.598 ************************************ 00:08:44.598 04:25:41 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:44.598 04:25:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.598 04:25:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.598 04:25:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.598 ************************************ 00:08:44.598 START TEST nvme_sgl 00:08:44.598 ************************************ 00:08:44.598 04:25:41 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:44.859 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:44.859 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:44.859 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:44.859 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:44.859 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:44.859 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:44.859 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:44.859 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:44.859 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:44.859 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:44.859 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:44.859 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:44.859 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:44.859 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:44.859 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:44.859 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:44.859 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:44.859 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:44.859 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:44.859 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:44.859 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:44.859 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:44.859 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:44.859 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:44.859 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:44.859 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:44.859 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:44.859 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:44.859 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:44.859 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:44.859 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:44.859 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:44.859 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:44.859 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:08:44.859 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:44.859 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:44.859 NVMe Readv/Writev Request test 00:08:44.859 Attached to 0000:00:13.0 00:08:44.859 Attached to 0000:00:10.0 00:08:44.859 Attached to 0000:00:11.0 00:08:44.859 Attached to 0000:00:12.0 00:08:44.859 0000:00:10.0: build_io_request_2 test passed 00:08:44.859 0000:00:10.0: build_io_request_4 test passed 00:08:44.859 0000:00:10.0: build_io_request_5 test passed 00:08:44.859 0000:00:10.0: build_io_request_6 test passed 00:08:44.859 0000:00:10.0: build_io_request_7 test passed 00:08:44.859 0000:00:10.0: build_io_request_10 test passed 00:08:44.859 0000:00:11.0: build_io_request_2 test passed 00:08:44.859 0000:00:11.0: build_io_request_4 test passed 00:08:44.859 0000:00:11.0: build_io_request_5 test passed 00:08:44.859 0000:00:11.0: build_io_request_6 test passed 00:08:44.859 0000:00:11.0: build_io_request_7 test passed 00:08:44.859 0000:00:11.0: build_io_request_10 test passed 00:08:44.859 Cleaning up... 00:08:44.859 00:08:44.859 real 0m0.295s 00:08:44.859 user 0m0.153s 00:08:44.859 sys 0m0.094s 00:08:44.859 04:25:41 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.859 ************************************ 00:08:44.859 END TEST nvme_sgl 00:08:44.859 04:25:41 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:44.859 ************************************ 00:08:44.859 04:25:41 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:44.859 04:25:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.859 04:25:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.859 04:25:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.859 ************************************ 00:08:44.859 START TEST nvme_e2edp 00:08:44.859 ************************************ 00:08:44.859 04:25:41 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:45.121 NVMe Write/Read with End-to-End data protection test 00:08:45.121 Attached to 0000:00:13.0 00:08:45.121 Attached to 0000:00:10.0 00:08:45.121 Attached to 0000:00:11.0 00:08:45.121 Attached to 0000:00:12.0 00:08:45.121 Cleaning up... 
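The pass/fail pattern in the nvme_sgl run above comes from building vectored requests whose segment lengths must add up to a whole number of blocks. A hedged sketch of the call shape involved, using spdk_nvme_ns_cmd_readv with caller-supplied SGE callbacks; the struct and helper names are illustrative rather than the test's own, and buffer setup plus completion polling are omitted.

#include <sys/uio.h>
#include "spdk/nvme.h"

/* cursor over a caller-supplied scatter list */
struct sgl_ctx {
	struct iovec	*iov;
	int		iovcnt;
	int		cur;
};

/* the driver asks us to restart the walk at a byte offset into the payload */
static void
reset_sgl(void *cb_arg, uint32_t offset)
{
	struct sgl_ctx *ctx = cb_arg;

	ctx->cur = 0;	/* sketch: assumes offset == 0; real code must seek to offset */
}

/* the driver pulls the next segment; return 0 on success */
static int
next_sge(void *cb_arg, void **address, uint32_t *length)
{
	struct sgl_ctx *ctx = cb_arg;

	*address = ctx->iov[ctx->cur].iov_base;
	*length = (uint32_t)ctx->iov[ctx->cur].iov_len;
	ctx->cur++;
	return 0;
}

/* Submit a vectored read. The driver rejects the request up front when the
 * segment lengths do not sum to lba_count * sector_size, which is what the
 * "Invalid IO length parameter" lines above exercise on purpose. */
static int
submit_sgl_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		struct sgl_ctx *ctx, uint64_t lba, uint32_t lba_count,
		spdk_nvme_cmd_cb done)
{
	return spdk_nvme_ns_cmd_readv(ns, qpair, lba, lba_count,
				      done, ctx, 0, reset_sgl, next_sge);
}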
00:08:45.121 00:08:45.121 real 0m0.202s 00:08:45.121 user 0m0.071s 00:08:45.121 sys 0m0.090s 00:08:45.121 04:25:41 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.121 04:25:41 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:45.121 ************************************ 00:08:45.121 END TEST nvme_e2edp 00:08:45.121 ************************************ 00:08:45.121 04:25:41 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:45.121 04:25:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.121 04:25:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.121 04:25:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.121 ************************************ 00:08:45.121 START TEST nvme_reserve 00:08:45.121 ************************************ 00:08:45.121 04:25:41 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:45.383 ===================================================== 00:08:45.383 NVMe Controller at PCI bus 0, device 19, function 0 00:08:45.383 ===================================================== 00:08:45.383 Reservations: Not Supported 00:08:45.383 ===================================================== 00:08:45.383 NVMe Controller at PCI bus 0, device 16, function 0 00:08:45.383 ===================================================== 00:08:45.383 Reservations: Not Supported 00:08:45.383 ===================================================== 00:08:45.383 NVMe Controller at PCI bus 0, device 17, function 0 00:08:45.383 ===================================================== 00:08:45.383 Reservations: Not Supported 00:08:45.383 ===================================================== 00:08:45.383 NVMe Controller at PCI bus 0, device 18, function 0 00:08:45.383 ===================================================== 00:08:45.383 Reservations: Not Supported 00:08:45.383 Reservation test passed 00:08:45.383 00:08:45.383 real 0m0.182s 00:08:45.383 user 0m0.066s 00:08:45.383 sys 0m0.086s 00:08:45.383 04:25:41 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.383 04:25:41 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:45.383 ************************************ 00:08:45.383 END TEST nvme_reserve 00:08:45.383 ************************************ 00:08:45.383 04:25:41 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:45.383 04:25:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.383 04:25:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.383 04:25:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.383 ************************************ 00:08:45.383 START TEST nvme_err_injection 00:08:45.383 ************************************ 00:08:45.383 04:25:41 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:45.644 NVMe Error Injection test 00:08:45.644 Attached to 0000:00:13.0 00:08:45.644 Attached to 0000:00:10.0 00:08:45.644 Attached to 0000:00:11.0 00:08:45.644 Attached to 0000:00:12.0 00:08:45.644 0000:00:13.0: get features failed as expected 00:08:45.644 0000:00:10.0: get features failed as expected 00:08:45.644 0000:00:11.0: get features failed as expected 00:08:45.644 0000:00:12.0: get features failed as expected 00:08:45.644 
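The four "failed as expected" lines above are produced in software, not by the hardware; the successes that follow come after the injection is cleared. A sketch of the SPDK error-injection calls behind this pattern (qpair == NULL targets the admin queue; whether injection is honored can depend on how the library was built, so treat this as illustrative):

#include "spdk/nvme.h"

/* Arm a one-shot software failure for the next GET FEATURES admin command.
 * The armed command completes with Invalid Opcode without reaching the
 * device; once err_count is spent, the same command succeeds again. */
static int
arm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
	return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
			SPDK_NVME_OPC_GET_FEATURES,
			false,	/* do_not_submit: still submit the command */
			0,	/* timeout_in_us */
			1,	/* err_count: fail exactly one command */
			SPDK_NVME_SCT_GENERIC,
			SPDK_NVME_SC_INVALID_OPCODE);
}

/* disarm it so the follow-up get-features and reads succeed, as logged above */
static void
disarm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
			SPDK_NVME_OPC_GET_FEATURES);
}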
0000:00:13.0: get features successfully as expected 00:08:45.644 0000:00:10.0: get features successfully as expected 00:08:45.644 0000:00:11.0: get features successfully as expected 00:08:45.644 0000:00:12.0: get features successfully as expected 00:08:45.644 0000:00:13.0: read failed as expected 00:08:45.644 0000:00:10.0: read failed as expected 00:08:45.644 0000:00:11.0: read failed as expected 00:08:45.644 0000:00:12.0: read failed as expected 00:08:45.644 0000:00:13.0: read successfully as expected 00:08:45.644 0000:00:10.0: read successfully as expected 00:08:45.644 0000:00:11.0: read successfully as expected 00:08:45.644 0000:00:12.0: read successfully as expected 00:08:45.644 Cleaning up... 00:08:45.644 00:08:45.644 real 0m0.233s 00:08:45.644 user 0m0.081s 00:08:45.644 sys 0m0.102s 00:08:45.644 04:25:42 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.644 04:25:42 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:45.644 ************************************ 00:08:45.644 END TEST nvme_err_injection 00:08:45.644 ************************************ 00:08:45.644 04:25:42 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:45.645 04:25:42 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:45.645 04:25:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.645 04:25:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.645 ************************************ 00:08:45.645 START TEST nvme_overhead 00:08:45.645 ************************************ 00:08:45.645 04:25:42 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:47.028 Initializing NVMe Controllers 00:08:47.028 Attached to 0000:00:13.0 00:08:47.028 Attached to 0000:00:10.0 00:08:47.028 Attached to 0000:00:11.0 00:08:47.028 Attached to 0000:00:12.0 00:08:47.028 Initialization complete. Launching workers. 
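For context on the submit/complete numbers that follow: overhead measures the host-side software cost of one IO at a time by timestamping around the submit call and around the completion poll. A sketch of that measurement loop, assuming the namespace, queue pair, and a DMA-safe buffer (for example from spdk_zmalloc) are already set up; helper names are illustrative.

#include "spdk/env.h"
#include "spdk/nvme.h"

static void
time_one_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	      void *buf, spdk_nvme_cmd_cb done, uint64_t *submit_ticks)
{
	uint64_t start = spdk_get_ticks();
	int rc = spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */,
				       1 /* blocks */, done, NULL,
				       0 /* io_flags */);

	*submit_ticks = spdk_get_ticks() - start;	/* software submit cost */
	if (rc != 0) {
		return;	/* submit failed; nothing to poll for */
	}
	/* busy-poll until the IO completes; completion cost is timed the same way */
	while (spdk_nvme_qpair_process_completions(qpair, 0) == 0) {
	}
	/* convert ticks to nanoseconds with spdk_get_ticks_hz() */
}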
00:08:47.028 submit (in ns) avg, min, max = 11511.9, 10710.8, 88603.1
00:08:47.028 complete (in ns) avg, min, max = 7779.3, 7243.1, 61863.8
00:08:47.028
00:08:47.028 Submit histogram
00:08:47.028 ================
00:08:47.028 Range in us Cumulative Count
00:08:47.029 [histogram rows omitted: 10.683 us through 88.615 us, cumulative 0.0077% to 100.0000%]
00:08:47.029 Complete histogram
00:08:47.029 ==================
00:08:47.029 Range in us Cumulative Count
00:08:47.030 [histogram rows omitted: 7.237 us through 62.228 us, cumulative 0.9612% to 100.0000%]
00:08:47.030
00:08:47.030 real 0m1.220s
00:08:47.030 user 0m1.077s
00:08:47.030 sys 0m0.092s
04:25:43 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:47.030 04:25:43 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:08:47.030 ************************************
00:08:47.030 END TEST nvme_overhead
00:08:47.030 ************************************
00:08:47.030 04:25:43 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
04:25:43 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
04:25:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
04:25:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:47.030 ************************************
00:08:47.030 START TEST nvme_arbitration
00:08:47.030 ************************************
00:08:47.030 04:25:43 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:50.332 Initializing NVMe Controllers
00:08:50.332 Attached to 0000:00:13.0
00:08:50.332 Attached to 0000:00:10.0
00:08:50.332 Attached to 0000:00:11.0
00:08:50.332 Attached to 0000:00:12.0
00:08:50.332 Associating QEMU NVMe Ctrl (12343 ) with lcore 0
00:08:50.332 Associating QEMU NVMe Ctrl (12340 ) with lcore 1
00:08:50.332 Associating QEMU NVMe Ctrl (12341 ) with lcore 2
00:08:50.332 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:08:50.332 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:08:50.332 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:08:50.332 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:08:50.332 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0
-b 0 -n 100000 -i 0 00:08:50.332 Initialization complete. Launching workers. 00:08:50.332 Starting thread on core 1 with urgent priority queue 00:08:50.332 Starting thread on core 2 with urgent priority queue 00:08:50.332 Starting thread on core 3 with urgent priority queue 00:08:50.332 Starting thread on core 0 with urgent priority queue 00:08:50.332 QEMU NVMe Ctrl (12343 ) core 0: 853.33 IO/s 117.19 secs/100000 ios 00:08:50.332 QEMU NVMe Ctrl (12342 ) core 0: 853.33 IO/s 117.19 secs/100000 ios 00:08:50.332 QEMU NVMe Ctrl (12340 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:08:50.332 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:08:50.332 QEMU NVMe Ctrl (12341 ) core 2: 874.67 IO/s 114.33 secs/100000 ios 00:08:50.332 QEMU NVMe Ctrl (12342 ) core 3: 874.67 IO/s 114.33 secs/100000 ios 00:08:50.332 ======================================================== 00:08:50.332 00:08:50.332 00:08:50.332 real 0m3.315s 00:08:50.332 user 0m9.248s 00:08:50.332 sys 0m0.114s 00:08:50.332 04:25:46 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.332 04:25:46 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:50.332 ************************************ 00:08:50.332 END TEST nvme_arbitration 00:08:50.332 ************************************ 00:08:50.332 04:25:46 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:50.332 04:25:46 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:50.332 04:25:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.332 04:25:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.332 ************************************ 00:08:50.332 START TEST nvme_single_aen 00:08:50.332 ************************************ 00:08:50.332 04:25:46 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:50.593 Asynchronous Event Request test 00:08:50.593 Attached to 0000:00:13.0 00:08:50.593 Attached to 0000:00:10.0 00:08:50.593 Attached to 0000:00:11.0 00:08:50.593 Attached to 0000:00:12.0 00:08:50.593 Reset controller to setup AER completions for this process 00:08:50.593 Registering asynchronous event callbacks... 
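
The aer tool starting here forces an Asynchronous Event Request by dropping each controller's temperature threshold below its current temperature (323 Kelvin above), waiting for the aer_cb callback to fire, and then restoring the threshold. A hedged sketch of the same trigger outside SPDK, using nvme-cli; the device path and threshold values are illustrative, not part of this run:

    # hedged sketch, not the SPDK test itself: trigger a temperature AER via nvme-cli.
    # /dev/nvme0 is illustrative; feature 0x04 is the NVMe Temperature Threshold feature.
    dev=/dev/nvme0
    nvme set-feature "$dev" -f 0x04 -v 0x141   # 321 K, just below the ~323 K current temp
    sleep 1                                    # controller posts the temperature AER
    nvme set-feature "$dev" -f 0x04 -v 0x157   # restore the 343 K (70 C) original seen above
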
00:08:50.593 Getting orig temperature thresholds of all controllers 00:08:50.593 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.593 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.593 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.593 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.593 Setting all controllers temperature threshold low to trigger AER 00:08:50.593 Waiting for all controllers temperature threshold to be set lower 00:08:50.593 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.593 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:50.593 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.593 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:50.593 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.593 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:50.593 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.593 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:50.593 Waiting for all controllers to trigger AER and reset threshold 00:08:50.593 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.593 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.593 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.593 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.593 Cleaning up... 00:08:50.593 00:08:50.593 real 0m0.227s 00:08:50.593 user 0m0.081s 00:08:50.593 sys 0m0.098s 00:08:50.593 04:25:46 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.593 04:25:46 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:50.593 ************************************ 00:08:50.593 END TEST nvme_single_aen 00:08:50.593 ************************************ 00:08:50.593 04:25:47 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:50.593 04:25:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.593 04:25:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.593 04:25:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.593 ************************************ 00:08:50.593 START TEST nvme_doorbell_aers 00:08:50.593 ************************************ 00:08:50.593 04:25:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:50.593 04:25:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:50.593 04:25:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:50.593 04:25:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:50.593 04:25:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:50.593 04:25:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:50.593 04:25:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:50.593 04:25:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:50.593 04:25:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:50.593 04:25:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
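
The surrounding xtrace builds the bdfs array by asking gen_nvme.sh for a generated bdev config and pulling each PCI address out with jq, then checks the count and loops over the controllers, giving each doorbell_aers pass a 10 second cap. Condensed into one runnable sketch (paths are the ones used throughout this log):

    # condensed from the surrounding xtrace; each per-BDF run is timeout-wrapped
    spdk=/home/vagrant/spdk_repo/spdk
    bdfs=($("$spdk/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$spdk/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done
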
00:08:50.593 04:25:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:50.593 04:25:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:50.593 04:25:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:50.593 04:25:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:50.855 [2024-11-27 04:25:47.296596] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. Dropping the request. 00:09:01.013 Executing: test_write_invalid_db 00:09:01.014 Waiting for AER completion... 00:09:01.014 Failure: test_write_invalid_db 00:09:01.014 00:09:01.014 Executing: test_invalid_db_write_overflow_sq 00:09:01.014 Waiting for AER completion... 00:09:01.014 Failure: test_invalid_db_write_overflow_sq 00:09:01.014 00:09:01.014 Executing: test_invalid_db_write_overflow_cq 00:09:01.014 Waiting for AER completion... 00:09:01.014 Failure: test_invalid_db_write_overflow_cq 00:09:01.014 00:09:01.014 04:25:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:01.014 04:25:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:01.014 [2024-11-27 04:25:57.329135] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. Dropping the request. 00:09:11.091 Executing: test_write_invalid_db 00:09:11.091 Waiting for AER completion... 00:09:11.091 Failure: test_write_invalid_db 00:09:11.091 00:09:11.091 Executing: test_invalid_db_write_overflow_sq 00:09:11.091 Waiting for AER completion... 00:09:11.091 Failure: test_invalid_db_write_overflow_sq 00:09:11.091 00:09:11.092 Executing: test_invalid_db_write_overflow_cq 00:09:11.092 Waiting for AER completion... 00:09:11.092 Failure: test_invalid_db_write_overflow_cq 00:09:11.092 00:09:11.092 04:26:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:11.092 04:26:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:11.092 [2024-11-27 04:26:07.374095] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. Dropping the request. 00:09:21.181 Executing: test_write_invalid_db 00:09:21.181 Waiting for AER completion... 00:09:21.181 Failure: test_write_invalid_db 00:09:21.181 00:09:21.181 Executing: test_invalid_db_write_overflow_sq 00:09:21.181 Waiting for AER completion... 00:09:21.181 Failure: test_invalid_db_write_overflow_sq 00:09:21.181 00:09:21.181 Executing: test_invalid_db_write_overflow_cq 00:09:21.181 Waiting for AER completion... 
00:09:21.181 Failure: test_invalid_db_write_overflow_cq 00:09:21.181 00:09:21.181 04:26:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:21.181 04:26:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:21.181 [2024-11-27 04:26:17.402278] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. Dropping the request. 00:09:31.187 Executing: test_write_invalid_db 00:09:31.187 Waiting for AER completion... 00:09:31.187 Failure: test_write_invalid_db 00:09:31.187 00:09:31.187 Executing: test_invalid_db_write_overflow_sq 00:09:31.187 Waiting for AER completion... 00:09:31.187 Failure: test_invalid_db_write_overflow_sq 00:09:31.187 00:09:31.187 Executing: test_invalid_db_write_overflow_cq 00:09:31.187 Waiting for AER completion... 00:09:31.187 Failure: test_invalid_db_write_overflow_cq 00:09:31.187 00:09:31.187 00:09:31.187 real 0m40.180s 00:09:31.187 user 0m34.217s 00:09:31.187 sys 0m5.573s 00:09:31.187 04:26:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.187 04:26:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:31.187 ************************************ 00:09:31.187 END TEST nvme_doorbell_aers 00:09:31.187 ************************************ 00:09:31.187 04:26:27 nvme -- nvme/nvme.sh@97 -- # uname 00:09:31.187 04:26:27 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:31.187 04:26:27 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:31.187 04:26:27 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:31.187 04:26:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.187 04:26:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.187 ************************************ 00:09:31.187 START TEST nvme_multi_aen 00:09:31.187 ************************************ 00:09:31.187 04:26:27 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:31.187 [2024-11-27 04:26:27.431292] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. Dropping the request. 00:09:31.187 [2024-11-27 04:26:27.431699] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. Dropping the request. 00:09:31.187 [2024-11-27 04:26:27.431769] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. Dropping the request. 00:09:31.187 [2024-11-27 04:26:27.433346] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. Dropping the request. 00:09:31.187 [2024-11-27 04:26:27.433442] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. Dropping the request. 00:09:31.187 [2024-11-27 04:26:27.433480] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. Dropping the request. 00:09:31.187 [2024-11-27 04:26:27.434522] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. 
Dropping the request. 00:09:31.187 [2024-11-27 04:26:27.434591] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. Dropping the request. 00:09:31.187 [2024-11-27 04:26:27.434624] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. Dropping the request. 00:09:31.187 [2024-11-27 04:26:27.435636] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. Dropping the request. 00:09:31.187 [2024-11-27 04:26:27.435703] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. Dropping the request. 00:09:31.187 [2024-11-27 04:26:27.435746] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63214) is not found. Dropping the request. 00:09:31.187 Child process pid: 63740 00:09:31.187 [Child] Asynchronous Event Request test 00:09:31.187 [Child] Attached to 0000:00:13.0 00:09:31.187 [Child] Attached to 0000:00:10.0 00:09:31.187 [Child] Attached to 0000:00:11.0 00:09:31.187 [Child] Attached to 0000:00:12.0 00:09:31.187 [Child] Registering asynchronous event callbacks... 00:09:31.187 [Child] Getting orig temperature thresholds of all controllers 00:09:31.187 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.187 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.187 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.187 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.187 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:31.187 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.187 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.187 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.187 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.187 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.187 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.187 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.187 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.187 [Child] Cleaning up... 00:09:31.187 Asynchronous Event Request test 00:09:31.187 Attached to 0000:00:13.0 00:09:31.187 Attached to 0000:00:10.0 00:09:31.187 Attached to 0000:00:11.0 00:09:31.187 Attached to 0000:00:12.0 00:09:31.187 Reset controller to setup AER completions for this process 00:09:31.187 Registering asynchronous event callbacks... 
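
Note the shape of the multi_aen run: invoked with -m, the aer tool first drives a forked child (pid 63740) through the full temperature-threshold sequence, which is the [Child]-prefixed block above, and only then does the parent repeat it, which is the unprefixed block beginning here. Judging purely from the two invocations in this log, the difference comes down to one flag:

    # the two aer invocations from this log; going by the output, -m adds the
    # forked [Child] pass in front of the parent's own AER sequence
    aer=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
    "$aer" -T -i 0      # nvme_single_aen: one process, one AER sequence
    "$aer" -m -T -i 0   # nvme_multi_aen: child pass first, then the parent pass
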
00:09:31.187 Getting orig temperature thresholds of all controllers 00:09:31.187 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.187 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.187 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.187 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.187 Setting all controllers temperature threshold low to trigger AER 00:09:31.187 Waiting for all controllers temperature threshold to be set lower 00:09:31.187 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.187 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:31.187 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.187 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:31.187 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.187 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:31.187 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.187 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:31.187 Waiting for all controllers to trigger AER and reset threshold 00:09:31.187 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.187 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.187 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.187 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.187 Cleaning up... 00:09:31.187 00:09:31.187 real 0m0.431s 00:09:31.187 user 0m0.143s 00:09:31.187 sys 0m0.181s 00:09:31.187 04:26:27 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.187 04:26:27 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:31.187 ************************************ 00:09:31.187 END TEST nvme_multi_aen 00:09:31.187 ************************************ 00:09:31.187 04:26:27 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:31.187 04:26:27 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:31.187 04:26:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.187 04:26:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.187 ************************************ 00:09:31.187 START TEST nvme_startup 00:09:31.188 ************************************ 00:09:31.188 04:26:27 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:31.447 Initializing NVMe Controllers 00:09:31.447 Attached to 0000:00:13.0 00:09:31.447 Attached to 0000:00:10.0 00:09:31.447 Attached to 0000:00:11.0 00:09:31.447 Attached to 0000:00:12.0 00:09:31.447 Initialization complete. 00:09:31.447 Time used:158911.953 (us). 
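
The startup test reports its initialization time in microseconds; 158911.953 us is roughly 0.159 s, which lines up with the 0m0.211s wall time printed just below once process startup and teardown are included. The conversion, for the record:

    awk 'BEGIN { printf "%.3f s\n", 158911.953 / 1e6 }'   # prints 0.159 s
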
00:09:31.447 00:09:31.447 real 0m0.211s 00:09:31.447 user 0m0.066s 00:09:31.447 sys 0m0.092s 00:09:31.447 04:26:27 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.447 04:26:27 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:31.447 ************************************ 00:09:31.447 END TEST nvme_startup 00:09:31.447 ************************************ 00:09:31.447 04:26:27 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:31.447 04:26:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.447 04:26:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.447 04:26:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.447 ************************************ 00:09:31.447 START TEST nvme_multi_secondary 00:09:31.447 ************************************ 00:09:31.447 04:26:27 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:31.447 04:26:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63791 00:09:31.447 04:26:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63792 00:09:31.447 04:26:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:31.447 04:26:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:31.447 04:26:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:34.758 Initializing NVMe Controllers 00:09:34.758 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:34.758 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:34.758 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:34.758 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:34.758 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:34.758 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:34.758 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:34.758 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:34.758 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:34.758 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:34.758 Initialization complete. Launching workers. 
00:09:34.758 ======================================================== 00:09:34.758 Latency(us) 00:09:34.758 Device Information : IOPS MiB/s Average min max 00:09:34.758 PCIE (0000:00:13.0) NSID 1 from core 1: 5177.12 20.22 3090.03 840.17 10446.68 00:09:34.758 PCIE (0000:00:10.0) NSID 1 from core 1: 5177.12 20.22 3089.26 835.80 10368.09 00:09:34.758 PCIE (0000:00:11.0) NSID 1 from core 1: 5177.12 20.22 3090.50 933.28 10254.35 00:09:34.758 PCIE (0000:00:12.0) NSID 1 from core 1: 5177.12 20.22 3090.60 943.96 9480.10 00:09:34.758 PCIE (0000:00:12.0) NSID 2 from core 1: 5177.12 20.22 3090.85 895.30 11623.87 00:09:34.758 PCIE (0000:00:12.0) NSID 3 from core 1: 5177.12 20.22 3091.07 832.10 10817.76 00:09:34.758 ======================================================== 00:09:34.758 Total : 31062.73 121.34 3090.39 832.10 11623.87 00:09:34.758 00:09:34.758 Initializing NVMe Controllers 00:09:34.758 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:34.758 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:34.758 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:34.758 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:34.758 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:34.758 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:34.758 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:34.758 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:34.758 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:34.758 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:34.758 Initialization complete. Launching workers. 00:09:34.758 ======================================================== 00:09:34.758 Latency(us) 00:09:34.758 Device Information : IOPS MiB/s Average min max 00:09:34.758 PCIE (0000:00:13.0) NSID 1 from core 2: 2096.47 8.19 7631.49 1988.02 15769.24 00:09:34.758 PCIE (0000:00:10.0) NSID 1 from core 2: 2096.47 8.19 7630.13 1991.12 18410.48 00:09:34.758 PCIE (0000:00:11.0) NSID 1 from core 2: 2096.47 8.19 7631.75 2014.99 17515.57 00:09:34.758 PCIE (0000:00:12.0) NSID 1 from core 2: 2096.47 8.19 7631.82 2071.86 17064.88 00:09:34.758 PCIE (0000:00:12.0) NSID 2 from core 2: 2096.47 8.19 7631.88 2072.21 17775.48 00:09:34.758 PCIE (0000:00:12.0) NSID 3 from core 2: 2096.47 8.19 7632.71 1452.83 14166.21 00:09:34.758 ======================================================== 00:09:34.758 Total : 12578.80 49.14 7631.63 1452.83 18410.48 00:09:34.758 00:09:35.019 04:26:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63791 00:09:36.935 Initializing NVMe Controllers 00:09:36.935 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:36.935 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:36.935 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:36.935 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:36.935 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:36.935 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:36.935 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:36.935 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:36.935 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:36.935 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:36.935 Initialization complete. Launching workers. 
00:09:36.935 ======================================================== 00:09:36.935 Latency(us) 00:09:36.935 Device Information : IOPS MiB/s Average min max 00:09:36.935 PCIE (0000:00:13.0) NSID 1 from core 0: 7990.14 31.21 2002.06 747.32 7511.00 00:09:36.935 PCIE (0000:00:10.0) NSID 1 from core 0: 7990.14 31.21 2001.12 720.65 7328.03 00:09:36.935 PCIE (0000:00:11.0) NSID 1 from core 0: 7990.14 31.21 2002.05 787.66 8110.65 00:09:36.935 PCIE (0000:00:12.0) NSID 1 from core 0: 7990.14 31.21 2002.04 767.42 7499.20 00:09:36.935 PCIE (0000:00:12.0) NSID 2 from core 0: 7990.14 31.21 2002.04 725.07 7643.34 00:09:36.935 PCIE (0000:00:12.0) NSID 3 from core 0: 7990.14 31.21 2002.03 733.55 7028.68 00:09:36.935 ======================================================== 00:09:36.935 Total : 47940.82 187.27 2001.89 720.65 8110.65 00:09:36.935 00:09:36.935 04:26:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63792 00:09:36.935 04:26:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63866 00:09:36.935 04:26:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:36.935 04:26:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63867 00:09:36.935 04:26:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:36.935 04:26:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:40.235 Initializing NVMe Controllers 00:09:40.235 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:40.235 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:40.235 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:40.235 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:40.235 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:40.235 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:40.235 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:40.235 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:40.235 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:40.235 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:40.235 Initialization complete. Launching workers. 
00:09:40.235 ======================================================== 00:09:40.235 Latency(us) 00:09:40.235 Device Information : IOPS MiB/s Average min max 00:09:40.235 PCIE (0000:00:13.0) NSID 1 from core 0: 5029.72 19.65 3180.69 816.90 11854.43 00:09:40.235 PCIE (0000:00:10.0) NSID 1 from core 0: 5029.72 19.65 3180.95 783.29 11884.69 00:09:40.235 PCIE (0000:00:11.0) NSID 1 from core 0: 5029.72 19.65 3182.21 816.65 12378.58 00:09:40.235 PCIE (0000:00:12.0) NSID 1 from core 0: 5029.72 19.65 3183.18 813.53 13398.74 00:09:40.235 PCIE (0000:00:12.0) NSID 2 from core 0: 5029.72 19.65 3183.33 827.46 12030.24 00:09:40.235 PCIE (0000:00:12.0) NSID 3 from core 0: 5029.72 19.65 3183.94 823.32 12486.10 00:09:40.235 ======================================================== 00:09:40.235 Total : 30178.34 117.88 3182.38 783.29 13398.74 00:09:40.235 00:09:40.235 Initializing NVMe Controllers 00:09:40.235 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:40.235 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:40.235 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:40.235 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:40.235 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:40.235 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:40.235 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:40.235 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:40.235 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:40.235 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:40.235 Initialization complete. Launching workers. 00:09:40.235 ======================================================== 00:09:40.235 Latency(us) 00:09:40.235 Device Information : IOPS MiB/s Average min max 00:09:40.235 PCIE (0000:00:13.0) NSID 1 from core 1: 4786.12 18.70 3342.49 1352.40 12199.86 00:09:40.235 PCIE (0000:00:10.0) NSID 1 from core 1: 4786.12 18.70 3341.74 1170.22 12025.34 00:09:40.235 PCIE (0000:00:11.0) NSID 1 from core 1: 4786.12 18.70 3342.89 1200.41 11865.20 00:09:40.235 PCIE (0000:00:12.0) NSID 1 from core 1: 4786.12 18.70 3342.85 1240.03 11445.39 00:09:40.235 PCIE (0000:00:12.0) NSID 2 from core 1: 4786.12 18.70 3343.69 1251.32 11045.90 00:09:40.235 PCIE (0000:00:12.0) NSID 3 from core 1: 4786.12 18.70 3344.61 1279.25 11740.63 00:09:40.235 ======================================================== 00:09:40.235 Total : 28716.73 112.17 3343.04 1170.22 12199.86 00:09:40.235 00:09:42.776 Initializing NVMe Controllers 00:09:42.776 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:42.776 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:42.776 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:42.776 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:42.776 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:42.776 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:42.776 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:42.776 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:42.776 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:42.776 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:42.776 Initialization complete. Launching workers. 
00:09:42.776 ======================================================== 00:09:42.776 Latency(us) 00:09:42.776 Device Information : IOPS MiB/s Average min max 00:09:42.776 PCIE (0000:00:13.0) NSID 1 from core 2: 2916.64 11.39 5485.08 993.43 34935.04 00:09:42.776 PCIE (0000:00:10.0) NSID 1 from core 2: 2916.64 11.39 5484.40 1012.29 36555.55 00:09:42.776 PCIE (0000:00:11.0) NSID 1 from core 2: 2916.64 11.39 5485.17 965.84 33223.02 00:09:42.776 PCIE (0000:00:12.0) NSID 1 from core 2: 2916.64 11.39 5485.08 1008.44 35325.66 00:09:42.776 PCIE (0000:00:12.0) NSID 2 from core 2: 2916.64 11.39 5484.71 1068.08 35066.89 00:09:42.776 PCIE (0000:00:12.0) NSID 3 from core 2: 2916.64 11.39 5484.91 1068.39 35477.13 00:09:42.776 ======================================================== 00:09:42.776 Total : 17499.86 68.36 5484.89 965.84 36555.55 00:09:42.776 00:09:42.776 04:26:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63866 00:09:42.776 04:26:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63867 00:09:42.776 00:09:42.776 real 0m10.849s 00:09:42.776 user 0m18.405s 00:09:42.776 sys 0m0.602s 00:09:42.776 04:26:38 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.776 04:26:38 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:42.776 ************************************ 00:09:42.776 END TEST nvme_multi_secondary 00:09:42.776 ************************************ 00:09:42.776 04:26:38 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:42.776 04:26:38 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:42.776 04:26:38 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62806 ]] 00:09:42.776 04:26:38 nvme -- common/autotest_common.sh@1094 -- # kill 62806 00:09:42.776 04:26:38 nvme -- common/autotest_common.sh@1095 -- # wait 62806 00:09:42.776 [2024-11-27 04:26:38.849656] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 00:09:42.776 [2024-11-27 04:26:38.849740] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 00:09:42.776 [2024-11-27 04:26:38.849766] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 00:09:42.776 [2024-11-27 04:26:38.849782] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 00:09:42.776 [2024-11-27 04:26:38.852056] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 00:09:42.776 [2024-11-27 04:26:38.852121] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 00:09:42.776 [2024-11-27 04:26:38.852136] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 00:09:42.776 [2024-11-27 04:26:38.852150] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 00:09:42.776 [2024-11-27 04:26:38.854375] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 
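
With the secondary-process tests done, the kill_stub call above tears down the stub process (pid 62806) that has been holding the four controllers for the whole suite, and the nvme_pcie_common.c errors continuing below are pending admin requests whose owning test process (pid 63739) already exited, so the driver drops them during teardown. The cleanup, condensed from the trace (the pid and stub file are this run's values; wait only reaps a child of the calling shell):

    # condensed kill_stub sequence from the trace above
    stub_pid=62806
    if [[ -e /proc/$stub_pid ]]; then
        kill "$stub_pid"
        wait "$stub_pid" 2>/dev/null || true   # reap it; ignore the kill-induced status
    fi
    rm -f /var/run/spdk_stub0
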
00:09:42.776 [2024-11-27 04:26:38.854422] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 00:09:42.776 [2024-11-27 04:26:38.854436] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 00:09:42.776 [2024-11-27 04:26:38.854450] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 00:09:42.776 [2024-11-27 04:26:38.857656] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 00:09:42.776 [2024-11-27 04:26:38.858142] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 00:09:42.776 [2024-11-27 04:26:38.858216] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 00:09:42.776 [2024-11-27 04:26:38.858685] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63739) is not found. Dropping the request. 00:09:42.776 04:26:38 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:42.776 04:26:38 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:42.776 04:26:38 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:42.776 04:26:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.776 04:26:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.776 04:26:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:42.776 ************************************ 00:09:42.776 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:42.776 ************************************ 00:09:42.776 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:42.776 * Looking for test storage... 
00:09:42.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:42.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.777 --rc genhtml_branch_coverage=1 00:09:42.777 --rc genhtml_function_coverage=1 00:09:42.777 --rc genhtml_legend=1 00:09:42.777 --rc geninfo_all_blocks=1 00:09:42.777 --rc geninfo_unexecuted_blocks=1 00:09:42.777 00:09:42.777 ' 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:42.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.777 --rc genhtml_branch_coverage=1 00:09:42.777 --rc genhtml_function_coverage=1 00:09:42.777 --rc genhtml_legend=1 00:09:42.777 --rc geninfo_all_blocks=1 00:09:42.777 --rc geninfo_unexecuted_blocks=1 00:09:42.777 00:09:42.777 ' 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:42.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.777 --rc genhtml_branch_coverage=1 00:09:42.777 --rc genhtml_function_coverage=1 00:09:42.777 --rc genhtml_legend=1 00:09:42.777 --rc geninfo_all_blocks=1 00:09:42.777 --rc geninfo_unexecuted_blocks=1 00:09:42.777 00:09:42.777 ' 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:42.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.777 --rc genhtml_branch_coverage=1 00:09:42.777 --rc genhtml_function_coverage=1 00:09:42.777 --rc genhtml_legend=1 00:09:42.777 --rc geninfo_all_blocks=1 00:09:42.777 --rc geninfo_unexecuted_blocks=1 00:09:42.777 00:09:42.777 ' 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:42.777 
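
A step back: the scripts/common.sh xtrace above (lt 1.15 2 and cmp_versions) is autotest deciding whether the installed lcov is new enough for the --rc coverage options, by splitting both versions on IFS=.-: and comparing component by component. A self-contained sketch of that comparator; lt_version is an illustrative name, and numeric version components are assumed:

    # hedged sketch of the component-wise version compare walked through above
    lt_version() {                      # returns 0 when $1 sorts before $2
        local IFS='.-:'
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do # missing components count as 0
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                        # equal is not less-than
    }
    lt_version 1.15 2 && echo "lcov 1.15 predates the --rc options"
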
04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64030 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64030 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64030 ']' 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
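
spdk_tgt has just been launched with -m 0xF (a four-core mask), and waitforlisten now polls the target's RPC UNIX socket until it answers. A minimal sketch of that wait; the rpc_get_methods probe and the 0.5 s interval are illustrative choices, not the script's exact loop:

    # minimal waitforlisten-style sketch; probe method and interval are assumptions
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/bin/spdk_tgt" -m 0xF &
    tgt_pid=$!
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "spdk_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"
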
00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.777 04:26:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:42.777 [2024-11-27 04:26:39.274151] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:42.777 [2024-11-27 04:26:39.274278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64030 ] 00:09:43.035 [2024-11-27 04:26:39.444256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.035 [2024-11-27 04:26:39.547650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.035 [2024-11-27 04:26:39.547839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.035 [2024-11-27 04:26:39.548311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.035 [2024-11-27 04:26:39.548431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.604 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.604 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:43.604 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:43.604 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.604 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:43.866 nvme0n1 00:09:43.866 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.866 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:43.866 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_y5Pqm.txt 00:09:43.866 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:43.866 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.866 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:43.866 true 00:09:43.866 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.866 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:43.866 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732681600 00:09:43.866 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64053 00:09:43.866 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:43.866 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:43.866 04:26:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:45.779 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:45.779 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.779 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:45.779 [2024-11-27 04:26:42.249913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:45.779 [2024-11-27 04:26:42.250208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:45.779 [2024-11-27 04:26:42.250232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:45.779 [2024-11-27 04:26:42.250246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:45.779 [2024-11-27 04:26:42.252251] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:45.779 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64053 00:09:45.779 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.779 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64053 00:09:45.779 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64053 00:09:45.779 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:45.779 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:45.779 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:45.779 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.779 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:45.779 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.779 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:45.779 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_y5Pqm.txt 00:09:45.779 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_y5Pqm.txt 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64030 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64030 ']' 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64030 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.780 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64030 00:09:46.040 killing process with pid 64030 00:09:46.040 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.040 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.040 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64030' 00:09:46.040 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64030 00:09:46.040 04:26:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64030 00:09:47.423 04:26:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:47.423 04:26:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:47.423 ************************************ 00:09:47.423 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:47.423 ************************************ 00:09:47.423 00:09:47.423 real 0m4.714s 00:09:47.423 user 
0m16.798s 00:09:47.424 sys 0m0.488s 00:09:47.424 04:26:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.424 04:26:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:47.424 04:26:43 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:47.424 04:26:43 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:47.424 04:26:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.424 04:26:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.424 04:26:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:47.424 ************************************ 00:09:47.424 START TEST nvme_fio 00:09:47.424 ************************************ 00:09:47.424 04:26:43 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:47.424 04:26:43 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:47.424 04:26:43 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:47.424 04:26:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:47.424 04:26:43 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:47.424 04:26:43 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:47.424 04:26:43 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:47.424 04:26:43 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:47.424 04:26:43 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:47.424 04:26:43 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:47.424 04:26:43 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:47.424 04:26:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:47.424 04:26:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:47.424 04:26:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:47.424 04:26:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:47.424 04:26:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:47.685 04:26:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:47.685 04:26:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:47.946 04:26:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:47.946 04:26:44 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1344 
-- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:47.946 04:26:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:47.946 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:47.946 fio-3.35 00:09:47.946 Starting 1 thread 00:09:54.509 00:09:54.509 test: (groupid=0, jobs=1): err= 0: pid=64190: Wed Nov 27 04:26:50 2024 00:09:54.509 read: IOPS=22.5k, BW=87.7MiB/s (92.0MB/s)(176MiB/2001msec) 00:09:54.509 slat (usec): min=3, max=104, avg= 5.24, stdev= 2.29 00:09:54.509 clat (usec): min=214, max=9344, avg=2845.94, stdev=825.70 00:09:54.509 lat (usec): min=219, max=9377, avg=2851.19, stdev=826.93 00:09:54.509 clat percentiles (usec): 00:09:54.509 | 1.00th=[ 1876], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:54.509 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2573], 00:09:54.509 | 70.00th=[ 2704], 80.00th=[ 3195], 90.00th=[ 3982], 95.00th=[ 4752], 00:09:54.509 | 99.00th=[ 5866], 99.50th=[ 6259], 99.90th=[ 7242], 99.95th=[ 7570], 00:09:54.509 | 99.99th=[ 9241] 00:09:54.509 bw ( KiB/s): min=76752, max=96176, per=98.26%, avg=88296.00, stdev=10217.22, samples=3 00:09:54.509 iops : min=19188, max=24044, avg=22074.00, stdev=2554.31, samples=3 00:09:54.509 write: IOPS=22.3k, BW=87.2MiB/s (91.5MB/s)(175MiB/2001msec); 0 zone resets 00:09:54.509 slat (nsec): min=3465, max=88565, avg=5462.92, stdev=2231.67 00:09:54.509 clat (usec): min=229, max=9291, avg=2849.39, stdev=825.36 00:09:54.509 lat (usec): min=234, max=9299, avg=2854.85, stdev=826.54 00:09:54.509 clat percentiles (usec): 00:09:54.509 | 1.00th=[ 1860], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:54.509 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2573], 00:09:54.509 | 70.00th=[ 2737], 80.00th=[ 3195], 90.00th=[ 3982], 95.00th=[ 4752], 00:09:54.509 | 99.00th=[ 5800], 99.50th=[ 6259], 99.90th=[ 7308], 99.95th=[ 8586], 00:09:54.509 | 99.99th=[ 9110] 00:09:54.509 bw ( KiB/s): min=76608, max=96704, per=99.04%, avg=88450.67, stdev=10517.83, samples=3 00:09:54.509 iops : min=19152, max=24176, avg=22112.67, stdev=2629.46, samples=3 00:09:54.509 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:54.509 lat (msec) : 2=1.82%, 4=88.22%, 10=9.92% 00:09:54.509 cpu : usr=99.15%, sys=0.05%, ctx=3, majf=0, minf=607 00:09:54.509 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:54.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:54.509 issued rwts: total=44950,44678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:54.509 00:09:54.509 Run status group 0 (all jobs): 00:09:54.509 READ: bw=87.7MiB/s (92.0MB/s), 87.7MiB/s-87.7MiB/s (92.0MB/s-92.0MB/s), io=176MiB (184MB), run=2001-2001msec 00:09:54.509 WRITE: bw=87.2MiB/s (91.5MB/s), 87.2MiB/s-87.2MiB/s (91.5MB/s-91.5MB/s), io=175MiB (183MB), run=2001-2001msec 00:09:54.509 ----------------------------------------------------- 00:09:54.509 Suppressions used: 00:09:54.509 count bytes template 00:09:54.509 1 32 /usr/src/fio/parse.c 00:09:54.509 1 8 libtcmalloc_minimal.so 00:09:54.509 ----------------------------------------------------- 00:09:54.509 00:09:54.509 04:26:50 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:54.509 04:26:50 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:54.509 04:26:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:54.509 04:26:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:54.509 04:26:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:54.509 04:26:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:54.768 04:26:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:54.768 04:26:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:54.768 04:26:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:55.026 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:55.026 fio-3.35 00:09:55.026 Starting 1 thread 00:10:01.577 00:10:01.577 test: (groupid=0, jobs=1): err= 0: pid=64251: Wed Nov 27 04:26:57 2024 00:10:01.577 read: IOPS=22.8k, BW=89.0MiB/s (93.3MB/s)(178MiB/2001msec) 00:10:01.577 slat (nsec): min=3344, max=86954, avg=5176.30, stdev=2297.18 00:10:01.577 clat (usec): min=206, max=8168, avg=2807.57, stdev=811.07 00:10:01.577 lat (usec): min=210, max=8207, avg=2812.75, stdev=812.48 00:10:01.577 clat percentiles (usec): 00:10:01.577 | 1.00th=[ 1975], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2409], 00:10:01.577 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2573], 00:10:01.577 | 70.00th=[ 2638], 80.00th=[ 2835], 90.00th=[ 3851], 95.00th=[ 4752], 00:10:01.577 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 7308], 99.95th=[ 7570], 00:10:01.577 | 99.99th=[ 7832] 00:10:01.577 bw ( KiB/s): min=89384, max=93520, per=100.00%, avg=91256.00, stdev=2095.68, samples=3 00:10:01.577 iops : min=22346, max=23380, avg=22814.00, stdev=523.92, samples=3 00:10:01.577 write: IOPS=22.7k, BW=88.5MiB/s (92.8MB/s)(177MiB/2001msec); 0 zone resets 00:10:01.577 slat (nsec): min=3469, max=88453, avg=5406.17, stdev=2193.14 00:10:01.577 clat (usec): min=233, max=7981, avg=2804.86, stdev=799.36 00:10:01.577 lat (usec): min=238, max=7985, avg=2810.27, stdev=800.70 00:10:01.577 clat percentiles (usec): 00:10:01.577 | 1.00th=[ 1975], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2409], 00:10:01.577 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2573], 00:10:01.577 | 70.00th=[ 2638], 80.00th=[ 2835], 90.00th=[ 3818], 95.00th=[ 4686], 00:10:01.577 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 7242], 99.95th=[ 7570], 00:10:01.577 | 99.99th=[ 7767] 00:10:01.577 bw ( KiB/s): min=88400, max=93272, per=100.00%, avg=91413.33, stdev=2633.26, samples=3 00:10:01.577 iops : min=22100, max=23318, avg=22853.33, stdev=658.31, samples=3 00:10:01.577 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:10:01.577 lat (msec) : 2=1.07%, 4=91.43%, 10=7.46% 00:10:01.577 cpu : usr=99.25%, sys=0.00%, ctx=3, majf=0, minf=606 00:10:01.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:01.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.577 issued rwts: total=45590,45329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.577 00:10:01.577 Run status group 0 (all jobs): 00:10:01.577 READ: bw=89.0MiB/s (93.3MB/s), 89.0MiB/s-89.0MiB/s (93.3MB/s-93.3MB/s), io=178MiB (187MB), run=2001-2001msec 00:10:01.577 WRITE: bw=88.5MiB/s (92.8MB/s), 88.5MiB/s-88.5MiB/s (92.8MB/s-92.8MB/s), io=177MiB (186MB), run=2001-2001msec 00:10:01.577 ----------------------------------------------------- 00:10:01.577 Suppressions used: 00:10:01.577 count bytes template 00:10:01.577 1 32 /usr/src/fio/parse.c 00:10:01.577 1 8 libtcmalloc_minimal.so 00:10:01.577 ----------------------------------------------------- 00:10:01.577 00:10:01.577 04:26:57 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:10:01.577 04:26:57 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:01.577 04:26:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:01.577 04:26:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:01.835 04:26:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:01.835 04:26:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:01.835 04:26:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:01.835 04:26:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:01.835 04:26:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:02.095 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:02.095 fio-3.35 00:10:02.095 Starting 1 thread 00:10:10.200 00:10:10.200 test: (groupid=0, jobs=1): err= 0: pid=64306: Wed Nov 27 04:27:06 2024 00:10:10.200 read: IOPS=23.3k, BW=91.1MiB/s (95.5MB/s)(182MiB/2001msec) 00:10:10.200 slat (nsec): min=3355, max=69767, avg=5058.66, stdev=2021.81 00:10:10.200 clat (usec): min=211, max=9370, avg=2733.60, stdev=736.48 00:10:10.200 lat (usec): min=215, max=9382, avg=2738.65, stdev=737.63 00:10:10.200 clat percentiles (usec): 00:10:10.200 | 1.00th=[ 1827], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2409], 00:10:10.200 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 
2507], 60.00th=[ 2540], 00:10:10.200 | 70.00th=[ 2606], 80.00th=[ 2769], 90.00th=[ 3752], 95.00th=[ 4178], 00:10:10.200 | 99.00th=[ 5932], 99.50th=[ 6390], 99.90th=[ 6915], 99.95th=[ 7308], 00:10:10.200 | 99.99th=[ 7570] 00:10:10.200 bw ( KiB/s): min=89360, max=100240, per=100.00%, avg=94482.67, stdev=5467.70, samples=3 00:10:10.200 iops : min=22340, max=25060, avg=23620.67, stdev=1366.92, samples=3 00:10:10.200 write: IOPS=23.2k, BW=90.5MiB/s (94.9MB/s)(181MiB/2001msec); 0 zone resets 00:10:10.200 slat (nsec): min=3475, max=61113, avg=5305.04, stdev=1965.25 00:10:10.200 clat (usec): min=246, max=9767, avg=2755.84, stdev=793.31 00:10:10.200 lat (usec): min=251, max=9772, avg=2761.14, stdev=794.36 00:10:10.200 clat percentiles (usec): 00:10:10.200 | 1.00th=[ 1860], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2409], 00:10:10.200 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:10:10.200 | 70.00th=[ 2606], 80.00th=[ 2802], 90.00th=[ 3785], 95.00th=[ 4228], 00:10:10.200 | 99.00th=[ 6128], 99.50th=[ 6652], 99.90th=[ 9634], 99.95th=[ 9634], 00:10:10.200 | 99.99th=[ 9765] 00:10:10.200 bw ( KiB/s): min=88848, max=99848, per=100.00%, avg=94466.67, stdev=5503.84, samples=3 00:10:10.200 iops : min=22212, max=24962, avg=23616.67, stdev=1375.96, samples=3 00:10:10.200 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:10:10.200 lat (msec) : 2=1.72%, 4=91.88%, 10=6.34% 00:10:10.200 cpu : usr=99.20%, sys=0.05%, ctx=11, majf=0, minf=606 00:10:10.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:10.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.200 issued rwts: total=46657,46341,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.200 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.200 00:10:10.200 Run status group 0 (all jobs): 00:10:10.200 READ: bw=91.1MiB/s (95.5MB/s), 91.1MiB/s-91.1MiB/s (95.5MB/s-95.5MB/s), io=182MiB (191MB), run=2001-2001msec 00:10:10.201 WRITE: bw=90.5MiB/s (94.9MB/s), 90.5MiB/s-90.5MiB/s (94.9MB/s-94.9MB/s), io=181MiB (190MB), run=2001-2001msec 00:10:10.201 ----------------------------------------------------- 00:10:10.201 Suppressions used: 00:10:10.201 count bytes template 00:10:10.201 1 32 /usr/src/fio/parse.c 00:10:10.201 1 8 libtcmalloc_minimal.so 00:10:10.201 ----------------------------------------------------- 00:10:10.201 00:10:10.201 04:27:06 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:10.201 04:27:06 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:10.201 04:27:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:10.201 04:27:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:10.459 04:27:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:10.459 04:27:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:10.717 04:27:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:10.717 04:27:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:10.717 04:27:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:10.717 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:10.717 fio-3.35 00:10:10.717 Starting 1 thread 00:10:25.586 00:10:25.586 test: (groupid=0, jobs=1): err= 0: pid=64367: Wed Nov 27 04:27:19 2024 00:10:25.586 read: IOPS=24.0k, BW=93.7MiB/s (98.2MB/s)(187MiB/2001msec) 00:10:25.586 slat (usec): min=3, max=114, avg= 4.90, stdev= 2.01 00:10:25.586 clat (usec): min=169, max=7819, avg=2665.42, stdev=723.20 00:10:25.586 lat (usec): min=173, max=7826, avg=2670.32, stdev=724.30 00:10:25.586 clat percentiles (usec): 00:10:25.586 | 1.00th=[ 1336], 5.00th=[ 1975], 10.00th=[ 2147], 20.00th=[ 2343], 00:10:25.586 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2540], 00:10:25.586 | 70.00th=[ 2606], 80.00th=[ 2802], 90.00th=[ 3490], 95.00th=[ 4113], 00:10:25.586 | 99.00th=[ 5800], 99.50th=[ 6259], 99.90th=[ 7046], 99.95th=[ 7242], 00:10:25.586 | 99.99th=[ 7701] 00:10:25.586 bw ( KiB/s): min=91304, max=95048, per=96.65%, avg=92712.00, stdev=2037.22, samples=3 00:10:25.586 iops : min=22826, max=23762, avg=23178.00, stdev=509.31, samples=3 00:10:25.586 write: IOPS=23.8k, BW=93.1MiB/s (97.6MB/s)(186MiB/2001msec); 0 zone resets 00:10:25.586 slat (nsec): min=3444, max=68415, avg=5156.74, stdev=2028.95 00:10:25.586 clat (usec): min=266, max=7702, avg=2670.41, stdev=727.31 00:10:25.586 lat (usec): min=270, max=7708, avg=2675.57, stdev=728.42 00:10:25.586 clat percentiles (usec): 00:10:25.586 | 1.00th=[ 1319], 5.00th=[ 1958], 10.00th=[ 2147], 20.00th=[ 2343], 00:10:25.586 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2540], 00:10:25.586 | 70.00th=[ 2606], 80.00th=[ 2802], 90.00th=[ 3523], 95.00th=[ 4113], 00:10:25.586 | 
99.00th=[ 5866], 99.50th=[ 6325], 99.90th=[ 6980], 99.95th=[ 7242], 00:10:25.586 | 99.99th=[ 7635] 00:10:25.586 bw ( KiB/s): min=90488, max=95032, per=97.34%, avg=92802.67, stdev=2273.20, samples=3 00:10:25.586 iops : min=22622, max=23758, avg=23200.67, stdev=568.30, samples=3 00:10:25.586 lat (usec) : 250=0.01%, 500=0.02%, 750=0.05%, 1000=0.19% 00:10:25.586 lat (msec) : 2=5.25%, 4=88.77%, 10=5.71% 00:10:25.586 cpu : usr=99.25%, sys=0.00%, ctx=4, majf=0, minf=605 00:10:25.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:25.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.586 issued rwts: total=47985,47693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.586 00:10:25.586 Run status group 0 (all jobs): 00:10:25.586 READ: bw=93.7MiB/s (98.2MB/s), 93.7MiB/s-93.7MiB/s (98.2MB/s-98.2MB/s), io=187MiB (197MB), run=2001-2001msec 00:10:25.586 WRITE: bw=93.1MiB/s (97.6MB/s), 93.1MiB/s-93.1MiB/s (97.6MB/s-97.6MB/s), io=186MiB (195MB), run=2001-2001msec 00:10:25.586 ----------------------------------------------------- 00:10:25.586 Suppressions used: 00:10:25.586 count bytes template 00:10:25.586 1 32 /usr/src/fio/parse.c 00:10:25.586 1 8 libtcmalloc_minimal.so 00:10:25.586 ----------------------------------------------------- 00:10:25.586 00:10:25.586 04:27:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:25.586 04:27:19 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:25.586 00:10:25.586 real 0m36.125s 00:10:25.586 user 0m21.104s 00:10:25.586 sys 0m27.085s 00:10:25.586 04:27:19 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.586 04:27:19 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:25.586 ************************************ 00:10:25.586 END TEST nvme_fio 00:10:25.586 ************************************ 00:10:25.586 00:10:25.586 real 1m46.817s 00:10:25.586 user 3m44.400s 00:10:25.586 sys 0m37.661s 00:10:25.586 04:27:19 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.586 04:27:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:25.586 ************************************ 00:10:25.586 END TEST nvme 00:10:25.586 ************************************ 00:10:25.586 04:27:19 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:25.586 04:27:19 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:25.586 04:27:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:25.586 04:27:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.586 04:27:19 -- common/autotest_common.sh@10 -- # set +x 00:10:25.586 ************************************ 00:10:25.586 START TEST nvme_scc 00:10:25.586 ************************************ 00:10:25.586 04:27:19 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:25.586 * Looking for test storage... 
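The nvme_fio run that just ended reduces to the per-controller loop below — a minimal sketch using only the paths visible in this trace, with the sanitizer probing collapsed into a single grep. fio dlopen()s the SPDK ioengine, so the matching ASan runtime must be preloaded ahead of the plugin, and because fio reserves ':' inside --filename the PCI address is spelled with '.' separators:

#!/usr/bin/env bash
# Sketch of the traced nvme_fio loop; paths taken from this log.
rootdir=/home/vagrant/spdk_repo/spdk
plugin=$rootdir/build/fio/spdk_nvme
config=$rootdir/app/fio/nvme/example_config.fio
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    # fio dlopen()s the ioengine, so a sanitizer-built plugin needs its
    # runtime resolved first; take whichever one the plugin links against.
    asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt.asan' | awk '{print $3}')
    # fio treats ':' specially in --filename, so the BDF is written with
    # '.' separators and the plugin maps it back to a PCI address.
    LD_PRELOAD="$asan_lib $plugin" \
        /usr/src/fio/fio "$config" "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=4096
done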
00:10:25.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:25.586 04:27:20 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:25.586 04:27:20 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:25.586 04:27:20 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:25.586 04:27:20 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:25.586 04:27:20 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.586 04:27:20 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:25.587 04:27:20 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.587 04:27:20 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:25.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.587 --rc genhtml_branch_coverage=1 00:10:25.587 --rc genhtml_function_coverage=1 00:10:25.587 --rc genhtml_legend=1 00:10:25.587 --rc geninfo_all_blocks=1 00:10:25.587 --rc geninfo_unexecuted_blocks=1 00:10:25.587 00:10:25.587 ' 00:10:25.587 04:27:20 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:25.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.587 --rc genhtml_branch_coverage=1 00:10:25.587 --rc genhtml_function_coverage=1 00:10:25.587 --rc genhtml_legend=1 00:10:25.587 --rc geninfo_all_blocks=1 00:10:25.587 --rc geninfo_unexecuted_blocks=1 00:10:25.587 00:10:25.587 ' 00:10:25.587 04:27:20 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:25.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.587 --rc genhtml_branch_coverage=1 00:10:25.587 --rc genhtml_function_coverage=1 00:10:25.587 --rc genhtml_legend=1 00:10:25.587 --rc geninfo_all_blocks=1 00:10:25.587 --rc geninfo_unexecuted_blocks=1 00:10:25.587 00:10:25.587 ' 00:10:25.587 04:27:20 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:25.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.587 --rc genhtml_branch_coverage=1 00:10:25.587 --rc genhtml_function_coverage=1 00:10:25.587 --rc genhtml_legend=1 00:10:25.587 --rc geninfo_all_blocks=1 00:10:25.587 --rc geninfo_unexecuted_blocks=1 00:10:25.587 00:10:25.587 ' 00:10:25.587 04:27:20 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:25.587 04:27:20 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:25.587 04:27:20 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:25.587 04:27:20 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:25.587 04:27:20 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.587 04:27:20 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.587 04:27:20 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.587 04:27:20 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.587 04:27:20 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.587 04:27:20 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:25.587 04:27:20 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
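The scan that starts next declares the ctrls/nvmes/bdfs associative arrays and then fills one array per controller by splitting every "name : value" line of id-ctrl output on ':' with IFS. A self-contained sketch of the same pattern — the nvme-cli path is taken from this log, while the key/value trimming is an added assumption (the traced helper stores the fields verbatim, padding included):

declare -A nvme0
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue                  # skip banner and blank lines
    reg=${reg%"${reg##*[![:space:]]}"}         # drop the padding after the key
    nvme0[$reg]=${val# }                       # drop the single space after ':'
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"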
00:10:25.587 04:27:20 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:25.587 04:27:20 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:25.587 04:27:20 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:25.587 04:27:20 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:25.587 04:27:20 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:25.587 04:27:20 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:25.587 04:27:20 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:25.587 04:27:20 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:25.587 04:27:20 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:25.587 04:27:20 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.587 04:27:20 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:25.587 04:27:20 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:25.587 04:27:20 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:25.587 04:27:20 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:25.587 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:25.587 Waiting for block devices as requested 00:10:25.587 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:25.587 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:25.587 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:25.587 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:29.793 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:29.793 04:27:25 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:29.793 04:27:25 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:29.793 04:27:25 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:29.793 04:27:25 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:29.793 04:27:25 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
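One worked decode for a field captured further down this dump: mdts is stored as 7, and MDTS is expressed as a power of two in units of the controller's minimum memory page size (CAP.MPSMIN). Assuming the typical 4 KiB minimum page, a single command is therefore capped at 512 KiB:

mdts=7 mpsmin_bytes=4096
echo $(( mpsmin_bytes << mdts ))   # 524288 bytes = 512 KiB per transfer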
00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.793 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.794 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
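The lpa=0x7 just recorded is a bit field; reading the bits as the NVMe base spec defines them, bit 0 advertises per-namespace SMART/Health data, bit 1 the Commands Supported and Effects log, and bit 2 extended Get Log Page support (Log Page Offset plus a wider NUMD), so 0x7 sets all three:

lpa=0x7
(( lpa & 0x1 )) && echo "SMART/Health log available per namespace"
(( lpa & 0x2 )) && echo "Commands Supported and Effects log supported"
(( lpa & 0x4 )) && echo "extended Get Log Page (LPO + wide NUMD)"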
00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.795 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:29.796 04:27:25 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.796 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:29.797 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:29.798 04:27:25 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:29.798 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:29.799 
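[annotation] The wall of repeated IFS=: / read -r reg val / eval fragments above is nvme/functions.sh's nvme_get helper flattening nvme-cli "field : value" output into bash associative arrays: each line of `nvme id-ctrl` or `nvme id-ns` output is split on the first colon, lines with an empty value are skipped, and the pair is stored via eval into a globally scoped array named after the device (nvme0, ng0n1, ...). A minimal sketch of that pattern, not the verbatim SPDK helper (the real one pins its own binary at /usr/local/src/nvme-cli/nvme; the key-squeezing step below is an assumption):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # global associative array named by $ref
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # assumption: "sqes   " -> "sqes"
            [[ -n $val ]] || continue       # skip headings and blank lines
            eval "${ref}[$reg]=\"${val# }\""  # id-* output never contains quotes
        done < <("$@")                      # this sketch takes the full command explicitly
    }

    # usage sketch: nvme_get nvme0 nvme id-ctrl /dev/nvme0; echo "${nvme0[sn]}"

A nameref (local -n) would avoid the eval, but the eval'd assignment is exactly what the trace echoes back as nvme0[sqes]=0x66 and friends.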
04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
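[annotation] A few chunks back, the loop switched from the controller to its namespaces: for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* is an extglob over sysfs that matches both the generic character node (ng0n1) and the block node (nvme0n1) of every namespace, and _ctrl_ns is a nameref into the per-controller array nvme0_ns. A standalone sketch of that walk, assuming a single controller at /sys/class/nvme/nvme0:

    shopt -s extglob                         # needed for the @(...) glob
    ctrl=/sys/class/nvme/nvme0
    declare -A nvme0_ns=()
    declare -n _ctrl_ns=nvme0_ns             # writes through the nameref land in nvme0_ns
    # the pattern expands to @(ng0|nvme0n)*: ng0n1, ng0n2, ..., nvme0n1, nvme0n2, ...
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        _ctrl_ns[${ns##*n}]=${ns##*/}        # key is the namespace index, e.g. 1
    done

Since ng0n1 sorts before nvme0n1, the ng entry is written first and then overwritten by the block-device name, which is exactly the _ctrl_ns[...]=ng0n1 / _ctrl_ns[...]=nvme0n1 sequence this log shows a little further down for namespace 1.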
00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:29.799 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:29.800 04:27:25 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:29.800 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:29.801 04:27:25 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:29.801 04:27:25 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.801 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:29.802 04:27:25 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:29.802 04:27:25 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:29.802 04:27:25 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:29.802 04:27:25 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:29.802 04:27:25 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.802 
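[annotation] Before adopting nvme1, the enumeration loop above vetted its PCI address with pci_can_use() from scripts/common.sh; the two tests visible in the trace, [[ =~ 0000:00:10.0 ]] against an empty left-hand side and then [[ -z '' ]], are a blocklist match and an empty-allowlist check, after which return 0 lets 0000:00:10.0 through. A hedged reconstruction of that gate; the PCI_BLOCKED / PCI_ALLOWED variable names are assumptions inferred from the empty expansions:

    pci_can_use() {
        local i
        # a blocklist hit always wins (the trace's [[ ... =~ 0000:00:10.0 ]])
        [[ " ${PCI_BLOCKED:-} " == *" $1 "* ]] && return 1
        # no allowlist configured -> every device is usable (the trace's [[ -z '' ]])
        [[ -z ${PCI_ALLOWED:-} ]] && return 0
        for i in ${PCI_ALLOWED}; do          # otherwise require an explicit allowlist hit
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }

In this run both lists are empty, so the check falls straight through the empty-allowlist branch and the script proceeds to nvme_get nvme1 id-ctrl /dev/nvme1.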
04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.802 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:29.803 
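[annotation] Two of the nvme1 values just captured are worth decoding. The vid/ssvid pair 0x1b36/0x1af4 are Red Hat's QEMU device IDs, consistent with mn reading 'QEMU NVMe Ctrl'. ver=0x10400 packs the implemented spec version one byte per component, and mdts=7 caps a single transfer at 2^7 units of the controller's minimum memory page size. A scratch calculation (the 4 KiB page size is an assumption; the authoritative unit is CAP.MPSMIN, which an identify dump does not include):

    ver=0x10400 mdts=7 mpsmin_bytes=4096
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    # -> NVMe 1.4.0
    printf 'max transfer: %d KiB\n' $(((1 << mdts) * mpsmin_bytes / 1024))
    # -> max transfer: 512 KiB

So this controller advertises NVMe 1.4.0 and, with 4 KiB pages, a 512 KiB maximum single-command transfer.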
04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.803 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
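The repeated @21-@23 pattern above is a key/value scrape of nvme-cli output: each "reg : val" line printed by nvme id-ctrl is split on the colon and stored into a bash associative array named after the device, which is what produces the nvme1[oacs]=0x12a style assignments in the trace. A minimal sketch of that loop, assuming plain nvme id-ctrl output; the function and variable names here are illustrative, not the verbatim nvme/functions.sh source:

    parse_ctrl() {
        local ref=$1 dev=$2 reg val
        declare -gA "$ref=()"                 # e.g. nvme1=(), as @20 does
        while IFS=': ' read -r reg val; do    # "oacs : 0x12a" -> reg=oacs, val=0x12a
            [[ -n $val ]] || continue         # skip blank values, as @22 does
            eval "${ref}[\$reg]=\$val"        # -> nvme1[oacs]=0x12a, as @23 shows
        done < <(nvme id-ctrl "$dev")
    }
    parse_ctrl nvme1 /dev/nvme1               # afterwards: ${nvme1[oacs]} -> 0x12a

The eval is what lets one generic function fill differently named arrays (nvme1, ng1n1, nvme1n1, nvme2) from the same read loop.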
00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:29.804 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.805 04:27:25 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.805 04:27:25 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.805 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
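The namespace walk that started at @54 uses an extglob pattern to pick up both node flavours under the controller's sysfs directory: the generic character device (ng1n1) and the block device (nvme1n1). That is why the same identify-namespace fields are parsed twice in this trace, once per node. A short sketch of the walk under the same assumption, with extglob enabled explicitly:

    shopt -s extglob                               # @(...) patterns need extglob
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue                   # same existence guard as @55
        ns_dev=${ns##*/}                           # ng1n1, then nvme1n1
        echo "nvme id-ns /dev/$ns_dev"             # each node gets its own id-ns parse
    done

Here "${ctrl##*nvme}" reduces to "1" and "${ctrl##*/}" to "nvme1", so the pattern matches ng1* and nvme1n* entries only.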
00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:29.806 04:27:25 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.806 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 
04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.807 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
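With the fields just read, the namespace capacity falls out directly: flbas=0x7 selects LBA format 7, which the trace marks "(in use)" with lbads:12, i.e. 2^12 = 4096-byte logical blocks (the ms:64 part is per-block metadata, carried separately), and nsze = 0x17a17a logical blocks. A quick check in shell arithmetic using only values from the trace:

    echo $(( 0x17a17a ))          # 1548666 logical blocks
    echo $(( 0x17a17a * 4096 ))   # 6343335936 bytes, about 5.9 GiB of data

ncap and nuse carry the same 0x17a17a here, which is typical for a fully allocated QEMU-emulated namespace.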
00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:29.808 
04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:29.808 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.809 04:27:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:29.810 04:27:25 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:29.810 04:27:25 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:29.810 04:27:25 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:29.810 04:27:25 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:29.810 04:27:25 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:29.810 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:29.811 04:27:25 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.811 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.812 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:29.813 04:27:25 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:29.813 
04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.813 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.814 
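With the nvme2 controller fields captured, the @53-@55 records show how the script walks that controller's namespaces: it binds a nameref to the per-controller array (nvme2_ns) and globs sysfs for both the character-device nodes (ng2n1) and block-device nodes (nvme2n1). A short sketch of that glob under the same assumptions — extglob is required for the @(...) pattern, and the variable names are taken from the trace:

  # How the @54 loop enumerates namespaces for ctrl=/sys/class/nvme/nvme2 (sketch):
  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme2
  inst=${ctrl##*nvme}                       # "2" — controller instance number
  for ns in "$ctrl/"@("ng${inst}"|"${ctrl##*/}n")*; do
    # pattern expands to @(ng2|nvme2n)* : matches ng2n1, ng2n2, nvme2n1, ...
    echo "ns_dev=${ns##*/}  index=${ns##*n}"   # ng2n1 -> index 1
  done

The ${ns##*n} expansion strips everything through the last 'n', which is how the @58 record later indexes _ctrl_ns by namespace number.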
04:27:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:29.814 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.815 04:27:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:29.816 04:27:25 nvme_scc -- 
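The ng2n1 values just captured also show how the in-use block size falls out of this data: flbas=0x4 selects LBA format 4, and lbaf4 reads "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte LBAs with no metadata. A sketch of that decode, assuming the associative array built by the trace above is in scope (the decode logic here is illustrative, not lifted from functions.sh):

  # Decode the current LBA format from the parsed id-ns fields (sketch):
  declare -n ns=ng2n1                 # nameref to the array filled above
  fmt=$(( ${ns[flbas]} & 0xf ))       # low nibble of flbas = current format, 4
  lbaf=${ns[lbaf$fmt]}                # 'ms:0 lbads:12 rp:0 (in use)'
  lbads=${lbaf#*lbads:}; lbads=${lbads%%[^0-9]*}
  echo "LBA size: $(( 1 << lbads )) bytes"   # -> 4096

The same decode applied to nvme1n1 earlier in the trace gives the identical answer, since its in-use format (lbaf7) also reports lbads:12.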
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 
04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.816 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.817 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.818 04:27:25 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.818 04:27:25 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.818 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.819 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:29.820 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.820 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.820 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.820 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.821 04:27:26 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:29.821 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:29.822 04:27:26 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.822 04:27:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:29.823 04:27:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:29.823 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:29.824 
04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.824 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:29.825 04:27:26 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.825 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.826 04:27:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:29.826 04:27:26 nvme_scc -- 
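Annotation: the long id-ns walk above (nvme2n2, then nvme2n3) is one repeating pattern: nvme-cli prints "reg : value" pairs, and the harness evals each pair into a global associative array named after the namespace. A minimal re-creation of that shape — hypothetical, not the verbatim nvme/functions.sh source; the trimming details and helper names are assumptions, only the binary path and the array/key names come from the trace:

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the @("ng2"|"nvme2n")* namespace glob seen in the trace
  # Sketch of the nvme_get pattern (argument handling and whitespace trimming assumed):
  nvme_get() {
    local ref=$1 cmd=$2 dev=$3 reg val
    local -gA "$ref=()"                       # e.g. declare a global nvme2n3=()
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}                # 'lbaf  0 ' -> 'lbaf0'
      [[ -n $val ]] || continue               # skip blank/header lines
      eval "${ref}[${reg}]=\"${val# }\""      # nvme2n3[nsze]="0x100000", ...
    done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
  }
  nvme_get nvme2n3 id-ns /dev/nvme2n3         # as invoked at functions.sh@16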
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:29.826 04:27:26 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:29.826 04:27:26 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:29.826 04:27:26 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:29.826 04:27:26 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:29.826 04:27:26 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.826 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:29.827 04:27:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 
04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:29.827 04:27:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.827 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 
04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:29.828 
04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.828 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.829 04:27:26 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:29.829 04:27:26 nvme_scc -- 
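Annotation: with the id-ctrl parse finished, functions.sh@60-63 register the controller in four parallel maps (ctrls, nvmes, bdfs, ordered_ctrls). Reads then go through a bash nameref rather than another eval; a small sketch of that lookup under the same names the trace uses, with the array pre-seeded to mirror this run:

  # As populated by the id-ctrl parse above (one key shown for brevity):
  declare -A nvme3=([oncs]=0x15d)
  get_nvme_ctrl_feature() {
    local ctrl=$1 reg=$2
    [[ -n $ctrl ]] || return 1
    local -n _ctrl=$ctrl                 # nameref: _ctrl aliases the nvme3 array
    [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
  }
  get_nvme_ctrl_feature nvme3 oncs       # prints 0x15d, as echoed at functions.sh@76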
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:29.829 04:27:26 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:29.829 04:27:26 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:29.829 04:27:26 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:29.829 04:27:26 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:29.829 04:27:26 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:30.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:30.659 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:30.659 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:30.659 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:30.659 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:30.659 04:27:27 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:30.659 04:27:27 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:30.659 04:27:27 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.659 04:27:27 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:30.659 ************************************ 00:10:30.659 START TEST nvme_simple_copy 00:10:30.659 ************************************ 00:10:30.659 04:27:27 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:30.918 Initializing NVMe Controllers 00:10:30.918 Attaching to 0000:00:10.0 00:10:30.918 Controller supports SCC. Attached to 0000:00:10.0 00:10:30.918 Namespace ID: 1 size: 6GB 00:10:30.918 Initialization complete. 
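The controller selection traced above reduces to one ONCS test: get_ctrls_with_feature keeps every controller whose Optional NVM Command Support field has bit 8 (the Copy command) set, and since all four QEMU controllers report oncs=0x15d, nvme1 is simply the first match. A minimal standalone sketch of that check, with the function name and hex value taken from the trace:

    # ctrl_has_scc reduced to its core: ONCS bit 8 = Simple Copy supported.
    ctrl_has_scc() {
        local oncs=$1            # ONCS from 'nvme id-ctrl', e.g. 0x15d above
        (( oncs & 1 << 8 ))
    }
    ctrl_has_scc 0x15d && echo "controller supports SCC"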
00:10:30.918 00:10:30.918 Controller QEMU NVMe Ctrl (12340 ) 00:10:30.918 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:30.918 Namespace Block Size:4096 00:10:30.918 Writing LBAs 0 to 63 with Random Data 00:10:30.918 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:30.918 LBAs matching Written Data: 64 00:10:30.918 00:10:30.918 real 0m0.249s 00:10:30.918 user 0m0.091s 00:10:30.918 sys 0m0.057s 00:10:30.918 04:27:27 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.918 04:27:27 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:30.918 ************************************ 00:10:30.918 END TEST nvme_simple_copy 00:10:30.918 ************************************ 00:10:30.918 ************************************ 00:10:30.918 END TEST nvme_scc 00:10:30.918 ************************************ 00:10:30.918 00:10:30.918 real 0m7.430s 00:10:30.918 user 0m0.958s 00:10:30.918 sys 0m1.254s 00:10:30.918 04:27:27 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.918 04:27:27 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:30.918 04:27:27 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:30.918 04:27:27 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:30.918 04:27:27 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:30.918 04:27:27 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:30.918 04:27:27 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:30.918 04:27:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:30.918 04:27:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.918 04:27:27 -- common/autotest_common.sh@10 -- # set +x 00:10:30.918 ************************************ 00:10:30.918 START TEST nvme_fdp 00:10:30.918 ************************************ 00:10:30.918 04:27:27 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:10:30.918 * Looking for test storage... 00:10:30.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:30.918 04:27:27 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:30.918 04:27:27 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:30.918 04:27:27 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:31.176 04:27:27 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.176 04:27:27 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:31.176 04:27:27 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.176 04:27:27 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:31.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.176 --rc genhtml_branch_coverage=1 00:10:31.176 --rc genhtml_function_coverage=1 00:10:31.176 --rc genhtml_legend=1 00:10:31.176 --rc geninfo_all_blocks=1 00:10:31.176 --rc geninfo_unexecuted_blocks=1 00:10:31.176 00:10:31.176 ' 00:10:31.176 04:27:27 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:31.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.177 --rc genhtml_branch_coverage=1 00:10:31.177 --rc genhtml_function_coverage=1 00:10:31.177 --rc genhtml_legend=1 00:10:31.177 --rc geninfo_all_blocks=1 00:10:31.177 --rc geninfo_unexecuted_blocks=1 00:10:31.177 00:10:31.177 ' 00:10:31.177 04:27:27 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:31.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.177 --rc genhtml_branch_coverage=1 00:10:31.177 --rc genhtml_function_coverage=1 00:10:31.177 --rc genhtml_legend=1 00:10:31.177 --rc geninfo_all_blocks=1 00:10:31.177 --rc geninfo_unexecuted_blocks=1 00:10:31.177 00:10:31.177 ' 00:10:31.177 04:27:27 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:31.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.177 --rc genhtml_branch_coverage=1 00:10:31.177 --rc genhtml_function_coverage=1 00:10:31.177 --rc genhtml_legend=1 00:10:31.177 --rc geninfo_all_blocks=1 00:10:31.177 --rc geninfo_unexecuted_blocks=1 00:10:31.177 00:10:31.177 ' 00:10:31.177 04:27:27 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:31.177 04:27:27 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:31.177 04:27:27 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:31.177 04:27:27 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:31.177 04:27:27 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:31.177 04:27:27 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.177 04:27:27 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.177 04:27:27 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.177 04:27:27 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.177 04:27:27 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.177 04:27:27 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.177 04:27:27 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.177 04:27:27 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:31.177 04:27:27 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.177 04:27:27 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:31.177 04:27:27 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:31.177 04:27:27 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:31.177 04:27:27 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:31.177 04:27:27 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:31.177 04:27:27 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:31.177 04:27:27 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:31.177 04:27:27 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:31.177 04:27:27 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:31.177 04:27:27 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:31.177 04:27:27 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:31.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:31.434 Waiting for block devices as requested 00:10:31.693 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:31.693 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:31.693 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:31.693 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:36.970 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:36.970 04:27:33 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:36.970 04:27:33 nvme_fdp 
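Before scanning controllers, the nvme_fdp prologue traced above probes the installed lcov with `lt 1.15 2` so it can pick the right set of --rc option names. cmp_versions splits both version strings on `.`, `-` and `:` and compares field by field; a condensed, self-contained sketch of that walk (treating missing trailing fields as 0, which is my assumption, not something the trace shows):

    # Condensed sketch of the cmp_versions comparison traced above.
    cmp_versions() {
        local op=$2 IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == ">" ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == "<" ]]; return; }
        done
        [[ $op == "==" ]]
    }
    cmp_versions 1.15 "<" 2 && echo "lcov < 2: use legacy --rc lcov_* names"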
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:36.970 04:27:33 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:36.970 04:27:33 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:36.970 04:27:33 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:36.970 04:27:33 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- 
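The wall of `eval 'nvme0[...]="..."'` lines that follows is nvme_get flattening `nvme id-ctrl /dev/nvme0` into one associative array per controller: each output line is split on the first colon into a register name and a value. The real helper quotes and evals each pair exactly as the trace shows; the sketch below is a simplified stand-in whose whitespace trimming is my own choice:

    # Simplified take on the nvme_get loop driving the trace below.
    shopt -s extglob
    declare -A nvme0
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue               # skip banner/blank lines
        reg=${reg//[[:space:]]/}                # field name, e.g. oncs
        val=${val##+([[:space:]])}              # drop leading padding
        nvme0[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    echo "oncs=${nvme0[oncs]} mdts=${nvme0[mdts]}"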
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:36.970 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:36.971 04:27:33 nvme_fdp -- 
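One field worth decoding by hand: the ver value 0x10400 captured above packs the NVMe spec version as major/minor/tertiary bytes, so these QEMU controllers report NVMe 1.4.0. A quick decode:

    # VER layout: bits 31:16 major, 15:8 minor, 7:0 tertiary.
    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $((ver >> 8 & 0xff)) $((ver & 0xff))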
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:36.971 04:27:33 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # 
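The wctemp=343 and cctemp=373 values above look odd until you recall the spec reports these thresholds in kelvin: 343 K is a 70 C warning threshold and 373 K a 100 C critical one.

    # WCTEMP/CCTEMP are kelvin; subtract 273 for Celsius.
    for k in 343 373; do echo "$k K = $((k - 273)) C"; done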
IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.971 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:36.972 04:27:33 nvme_fdp -- 
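The sqes=0x66 and cqes=0x44 bytes parsed just below each pack two power-of-two sizes per nibble: the low nibble is the required queue-entry size and the high nibble the maximum. A quick decode:

    # bits 3:0 = required entry size (2^n bytes), bits 7:4 = maximum.
    decode_qes() { printf 'min=%d max=%d bytes\n' $((1 << ($1 & 0xf))) $((1 << ($1 >> 4 & 0xf))); }
    decode_qes 0x66    # SQE: min=64 max=64
    decode_qes 0x44    # CQE: min=16 max=16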
nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 
04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:36.972 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:36.973 04:27:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:36.973 04:27:33 
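With the controller identify data captured, the scan moves on to namespaces. The extglob pattern in the trace, `@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*`, matches both the character-device node (ng0n1) and the block node (nvme0n1) under the controller's sysfs directory; unrolled for nvme0 it looks like this:

    # Namespace glob from functions.sh@54, isolated for nvme0.
    shopt -s extglob
    ctrl=/sys/class/nvme/nvme0
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] && echo "namespace node: ${ns##*/}"
    done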
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:36.973 04:27:33 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:36.973 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:36.974 04:27:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
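[editor note] The register dump above is functions.sh's nvme_get helper at work: it runs nvme-cli's id-ns against the device, reads each "reg : val" line with IFS=:, and evals the pair into a global associative array named after the device (hence the repeating IFS=: / read -r / eval triplets). A minimal sketch of that pattern follows; parse_id_output is an illustrative name, not the real helper in functions.sh.

parse_id_output() {
  local ref=$1 dev=$2 reg val
  declare -gA "$ref"                        # global assoc array, e.g. ng0n1
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                # "lbaf  0 " -> "lbaf0"
    val=${val#"${val%%[![:space:]]*}"}      # trim leading spaces in the value
    [[ -n $reg && -n $val ]] || continue    # skip the header/blank lines
    eval "${ref}[\$reg]=\"\$val\""          # e.g. ng0n1[mssrl]="128"
  done < <(nvme id-ns "$dev")
}

After parse_id_output ng0n1 /dev/ng0n1, "${ng0n1[mssrl]}" would print 128, matching the assignments traced above; note that trailing spaces survive in values like 'ms:0 lbads:9 rp:0 ', exactly as the lbaf entries show.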
00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.974 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:36.975 04:27:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:36.975 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.976 04:27:33 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:36.976 04:27:33 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:36.976 04:27:33 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:36.976 04:27:33 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:36.976 04:27:33 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:36.976 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:36.977 04:27:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.977 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:36.978 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 
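[editor note] Most of the nvme1 id-ctrl values captured above are raw, spec-encoded integers: vid 0x1b36 is QEMU's PCI vendor ID, wctemp/cctemp are in kelvin, sqes/cqes pack the maximum and required queue-entry sizes into the high and low nibbles, and mdts is a power of two in units of the controller's minimum page size. A small decoding sketch, assuming bash 4.3+ namerefs and a 4 KiB minimum page (plausible for this QEMU controller, but an assumption here):

decode_ctrl() {
  local -n c=$1                              # nameref to an array like nvme1
  echo "warn/crit temp: $(( c[wctemp] - 273 )) / $(( c[cctemp] - 273 )) C"
  local sqes=$(( c[sqes] )) cqes=$(( c[cqes] ))
  echo "SQ entry size : $(( 1 << (sqes & 0xf) ))..$(( 1 << (sqes >> 4) )) bytes"
  echo "CQ entry size : $(( 1 << (cqes & 0xf) ))..$(( 1 << (cqes >> 4) )) bytes"
  echo "max transfer  : $(( (1 << c[mdts]) * 4 )) KiB"   # mdts=7 -> 512 KiB
}

With the values traced here, decode_ctrl nvme1 would report 70/100 C thermal thresholds, 64-byte SQ and 16-byte CQ entries (0x66 and 0x44), and a 512 KiB maximum transfer.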
-- # eval 'nvme1[awun]="0"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:36.979 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:36.980 04:27:33 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
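[editor's note] The trace above is the inner loop of nvme/functions.sh's nvme_get: every line that /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns) prints is split on ":" into a register name and a value, and non-empty values are eval'd into a global associative array named after the device (nvme1, ng1n1, ...). A minimal sketch of that pattern — nvme_get_sketch is a hypothetical stand-in, and the whitespace trimming is an assumption inferred from the captured keys (awun, ps0, ...):

    # Sketch only: approximates the nvme_get visible at functions.sh@16-23.
    nvme_get_sketch() {
      local ref=$1 subcmd=$2 dev=$3 reg val
      local -gA "$ref=()"                   # global assoc array, e.g. nvme1=()
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # "ps    0" -> "ps0"
        val=${val#"${val%%[![:space:]]*}"}  # left-trim the value
        [[ -n $val ]] && eval "${ref}[${reg}]=\$val"
      done < <(/usr/local/src/nvme-cli/nvme "$subcmd" "$dev")
    }

    nvme_get_sketch nvme1 id-ctrl /dev/nvme1   # then: "${nvme1[subnqn]}" etc.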
00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:36.980 04:27:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.980 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
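[editor's note] Namespace discovery, seen above at functions.sh@54, relies on a bash extglob: "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* matches both the generic character device (ng1n1, handled in this pass) and the block device (nvme1n1, matched by the same loop further below). A standalone sketch of that pattern; the echo is illustrative only:

    shopt -s extglob                      # the @(a|b) alternation needs extglob
    ctrl=/sys/class/nvme/nvme1
    # ${ctrl##*nvme} -> "1" and ${ctrl##*/} -> "nvme1", so the glob becomes
    # /sys/class/nvme/nvme1/@(ng1|nvme1n)* and matches ng1n1 plus nvme1n1.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue
      echo "namespace device: ${ns##*/} (slot ${ns##*n})"   # slot "1" here
    done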
00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.981 04:27:33 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.981 04:27:33 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:36.981 04:27:33 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.981 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:36.982 04:27:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
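[editor's note] With a namespace's registers captured, the in-use LBA format is decodable: flbas=0x7 selects format 7, whose descriptor reads "ms:64 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte logical blocks with 64 bytes of metadata. A worked check against the captured values (only the low FLBAS nibble is used here; the extended-LBA and upper format bits are ignored in this sketch):

    flbas=0x7; nsze=0x17a17a; lbaf7='ms:64 lbads:12 rp:0 (in use)'
    fmt=$(( flbas & 0xf ))                      # -> 7, the "(in use)" format
    lbads=${lbaf7##*lbads:}; lbads=${lbads%% *} # -> 12
    bs=$(( 1 << lbads ))                        # -> 4096-byte blocks
    echo "$(( nsze )) blocks x ${bs}B = $(( nsze * bs )) bytes"
    # -> 1548666 blocks x 4096B = 6343335936 bytes (~6.3 GB QEMU test drive)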
00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:36.982 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:36.983 04:27:33 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:36.983 04:27:33 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:36.983 04:27:33 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:36.983 04:27:33 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
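[editor's note] Before nvme2 is parsed, the trace runs pci_can_use 0000:00:12.0 through scripts/common.sh: the [[ =~ 0000:00:12.0 ]] and [[ -z '' ]] tests suggest an allow-list and a block-list, both empty here, so the function returns 0 and the controller is admitted, then registered the way functions.sh@60-63 shows at the end of the nvme1 pass. A simplified reconstruction of that gate; PCI_ALLOWED/PCI_BLOCKED are assumed names, not confirmed from this log:

    declare -A ctrls nvmes bdfs; declare -a ordered_ctrls

    pci_can_use_sketch() {                 # hypothetical reconstruction
      local bdf=$1
      # with an allow-list set, the BDF must appear on it...
      if [[ -n ${PCI_ALLOWED:-} && ! $PCI_ALLOWED =~ $bdf ]]; then return 1; fi
      # ...and it must not appear on the block-list
      [[ -z ${PCI_BLOCKED:-} || ! $PCI_BLOCKED =~ $bdf ]]
    }

    if pci_can_use_sketch 0000:00:12.0; then
      ctrl_dev=nvme2                       # register it, as the trace does:
      ctrls[$ctrl_dev]=nvme2
      nvmes[$ctrl_dev]=nvme2_ns
      bdfs[$ctrl_dev]=0000:00:12.0
      ordered_ctrls[${ctrl_dev/nvme/}]=nvme2   # index 2
    fi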
00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.983 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:36.984 04:27:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
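[editor's note] Two fields in this stretch are Kelvin values: wctemp=343 and cctemp=373 are the warning and critical composite-temperature thresholds, i.e. about 70 °C and 100 °C. For reference:

    # NVMe reports temperature thresholds in Kelvin; convert for readability.
    for reg in wctemp:343 cctemp:373; do
      echo "${reg%%:*}: ${reg##*:} K = $(( ${reg##*:} - 273 )) C"
    done    # -> wctemp: 343 K = 70 C / cctemp: 373 K = 100 C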
00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.984 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:36.985 04:27:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:36.985 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.986 
04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:36.986 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:36.987 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:36.988 04:27:33 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.988 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 
04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:36.989 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:36.990 
04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:36.990 04:27:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:36.990 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.991 04:27:33 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:36.991 04:27:33 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.991 
04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.991 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.254 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:37.255 04:27:33 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.255 
04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
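For reference, each lbafN value captured above describes one LBA format: ms is the metadata bytes per LBA, lbads is log2 of the LBA data size, and rp is the relative performance; the entry tagged "(in use)" is the one selected by the low bits of flbas. Decoding the values just parsed for nvme2n1 (illustrative):

    flbas=$(( 0x4 ))        # nvme2n1[flbas]
    fmt=$(( flbas & 0xf ))  # -> 4, i.e. lbaf4 = "ms:0 lbads:12 rp:0 (in use)"
    lbads=12                # from lbaf4
    echo "$(( 1 << lbads )) bytes per LBA"    # 4096: 4 KiB sectors, no metadata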
00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:37.255 04:27:33 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:37.255 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:37.256 04:27:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:37.256 04:27:33 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.256 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:37.257 04:27:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.257 04:27:33 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:37.257 04:27:33 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:37.257 04:27:33 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:37.257 04:27:33 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:37.257 04:27:33 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.257 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 
04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:37.258 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:37.259 04:27:33 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:37.259 04:27:33 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:37.259 04:27:33 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:37.259 04:27:33 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:37.259 04:27:33 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:37.516 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:38.081 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:38.081 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:38.081 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:38.081 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:38.081 04:27:34 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:38.081 04:27:34 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:38.081 04:27:34 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.081 04:27:34 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:38.081 ************************************ 00:10:38.081 START TEST nvme_flexible_data_placement 00:10:38.081 ************************************ 00:10:38.081 04:27:34 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:38.396 Initializing NVMe Controllers 00:10:38.396 Attaching to 0000:00:13.0 00:10:38.396 Controller supports FDP Attached to 0000:00:13.0 00:10:38.396 Namespace ID: 1 Endurance Group ID: 1 00:10:38.396 Initialization complete. 
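To decode the controller-selection trace above: functions.sh walks every /sys/class/nvme/nvme* device, parses nvme-cli id-ctrl (and per-namespace id-ns) output line by line into a global bash associative array, and get_ctrls_with_feature fdp then keeps only controllers whose CTRATT has bit 19 (Flexible Data Placement) set. Only nvme3 qualifies in this run: its ctratt of 0x88010 contains 0x80000 (1 << 19), while nvme0, nvme1, and nvme2 report 0x8000 (bit 15 only). A condensed sketch of that flow, assuming bash 4.3+ namerefs -- the real functions.sh does more careful key trimming and bookkeeping than shown here:

    nvme_get() {                          # e.g.: nvme_get nvme3 id-ctrl /dev/nvme3
        local ref=$1 reg val; shift
        local -gA "$ref"; eval "$ref=()"  # one global associative array per device
        while IFS=: read -r reg val; do   # id-ctrl emits one "reg : val" pair per line
            reg=${reg//[[:space:]]/}      # "ps    0 " -> "ps0", as seen in the trace
            [[ -n $val ]] && eval "${ref}[${reg}]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    ctrl_has_fdp() {                      # CTRATT bit 19 = Flexible Data Placement
        local -n _ctrl=$1                 # nameref to, e.g., the nvme3 array above
        local ctratt=${_ctrl[ctratt]}     # "0x88010" for nvme3 in this run
        (( ctratt & 1 << 19 ))
    }

With the arrays populated, ctrl_has_fdp succeeds only for nvme3, so the FDP test binds that controller's BDF, 0000:00:13.0.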
00:10:38.396
00:10:38.396 ==================================
00:10:38.396 == FDP tests for Namespace: #01 ==
00:10:38.396 ==================================
00:10:38.396
00:10:38.396 Get Feature: FDP:
00:10:38.396 =================
00:10:38.396 Enabled: Yes
00:10:38.396 FDP configuration Index: 0
00:10:38.396
00:10:38.396 FDP configurations log page
00:10:38.396 ===========================
00:10:38.396 Number of FDP configurations: 1
00:10:38.396 Version: 0
00:10:38.396 Size: 112
00:10:38.396 FDP Configuration Descriptor: 0
00:10:38.396 Descriptor Size: 96
00:10:38.396 Reclaim Group Identifier format: 2
00:10:38.396 FDP Volatile Write Cache: Not Present
00:10:38.396 FDP Configuration: Valid
00:10:38.396 Vendor Specific Size: 0
00:10:38.396 Number of Reclaim Groups: 2
00:10:38.396 Number of Reclaim Unit Handles: 8
00:10:38.396 Max Placement Identifiers: 128
00:10:38.396 Number of Namespaces Supported: 256
00:10:38.396 Reclaim Unit Nominal Size: 6000000 bytes
00:10:38.396 Estimated Reclaim Unit Time Limit: Not Reported
00:10:38.396 RUH Desc #000: RUH Type: Initially Isolated
00:10:38.396 RUH Desc #001: RUH Type: Initially Isolated
00:10:38.396 RUH Desc #002: RUH Type: Initially Isolated
00:10:38.396 RUH Desc #003: RUH Type: Initially Isolated
00:10:38.396 RUH Desc #004: RUH Type: Initially Isolated
00:10:38.396 RUH Desc #005: RUH Type: Initially Isolated
00:10:38.396 RUH Desc #006: RUH Type: Initially Isolated
00:10:38.396 RUH Desc #007: RUH Type: Initially Isolated
00:10:38.396
00:10:38.396 FDP reclaim unit handle usage log page
00:10:38.396 ======================================
00:10:38.396 Number of Reclaim Unit Handles: 8
00:10:38.396 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:10:38.396 RUH Usage Desc #001: RUH Attributes: Unused
00:10:38.396 RUH Usage Desc #002: RUH Attributes: Unused
00:10:38.396 RUH Usage Desc #003: RUH Attributes: Unused
00:10:38.396 RUH Usage Desc #004: RUH Attributes: Unused
00:10:38.396 RUH Usage Desc #005: RUH Attributes: Unused
00:10:38.396 RUH Usage Desc #006: RUH Attributes: Unused
00:10:38.396 RUH Usage Desc #007: RUH Attributes: Unused
00:10:38.396
00:10:38.396 FDP statistics log page
00:10:38.396 =======================
00:10:38.396 Host bytes with metadata written: 953712640
00:10:38.396 Media bytes with metadata written: 953827328
00:10:38.396 Media bytes erased: 0
00:10:38.396
00:10:38.396 FDP Reclaim unit handle status
00:10:38.396 ==============================
00:10:38.396 Number of RUHS descriptors: 2
00:10:38.396 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003278
00:10:38.396 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:10:38.396
00:10:38.396 FDP write on placement id: 0 success
00:10:38.396
00:10:38.396 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:10:38.396
00:10:38.396 IO mgmt send: RUH update for Placement ID: #0 Success
00:10:38.396
00:10:38.396 Get Feature: FDP Events for Placement handle: #0
00:10:38.396 ========================
00:10:38.396 Number of FDP Events: 6
00:10:38.396 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:10:38.396 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:10:38.396 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:10:38.397 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:10:38.397 FDP Event: #4 Type: Media Reallocated Enabled: No
00:10:38.397 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:10:38.397
00:10:38.397 FDP events log page
00:10:38.397 ===================
00:10:38.397 Number of FDP events: 1
00:10:38.397 FDP Event #0:
00:10:38.397 Event Type: RU Not Written to Capacity
00:10:38.397 Placement Identifier: Valid
00:10:38.397 NSID: Valid
00:10:38.397 Location: Valid
00:10:38.397 Placement Identifier: 0
00:10:38.397 Event Timestamp: 6
00:10:38.397 Namespace Identifier: 1
00:10:38.397 Reclaim Group Identifier: 0
00:10:38.397 Reclaim Unit Handle Identifier: 0
00:10:38.397
00:10:38.397 FDP test passed
00:10:38.397
00:10:38.397 real 0m0.226s
00:10:38.397 user 0m0.070s
00:10:38.397 sys 0m0.056s
00:10:38.397 04:27:34 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:38.397 04:27:34 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:10:38.397 ************************************
00:10:38.397 END TEST nvme_flexible_data_placement
00:10:38.397 ************************************
00:10:38.397
00:10:38.397 real 0m7.458s
00:10:38.397 user 0m1.048s
00:10:38.397 sys 0m1.380s
00:10:38.397 04:27:34 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:38.397 04:27:34 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:10:38.397 ************************************
00:10:38.397 END TEST nvme_fdp
00:10:38.397 ************************************
00:10:38.397 04:27:34 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:10:38.397 04:27:34 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:38.397 04:27:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:38.397 04:27:34 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:38.397 04:27:34 -- common/autotest_common.sh@10 -- # set +x
00:10:38.397 ************************************
00:10:38.397 START TEST nvme_rpc
00:10:38.397 ************************************
00:10:38.397 04:27:34 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:38.656 * Looking for test storage...
00:10:38.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:38.656 04:27:34 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:38.656 04:27:34 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:38.656 04:27:34 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.656 04:27:35 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:38.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.656 --rc genhtml_branch_coverage=1 00:10:38.656 --rc genhtml_function_coverage=1 00:10:38.656 --rc genhtml_legend=1 00:10:38.656 --rc geninfo_all_blocks=1 00:10:38.656 --rc geninfo_unexecuted_blocks=1 00:10:38.656 00:10:38.656 ' 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:38.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.656 --rc genhtml_branch_coverage=1 00:10:38.656 --rc genhtml_function_coverage=1 00:10:38.656 --rc genhtml_legend=1 00:10:38.656 --rc geninfo_all_blocks=1 00:10:38.656 --rc geninfo_unexecuted_blocks=1 00:10:38.656 00:10:38.656 ' 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:38.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.656 --rc genhtml_branch_coverage=1 00:10:38.656 --rc genhtml_function_coverage=1 00:10:38.656 --rc genhtml_legend=1 00:10:38.656 --rc geninfo_all_blocks=1 00:10:38.656 --rc geninfo_unexecuted_blocks=1 00:10:38.656 00:10:38.656 ' 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:38.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.656 --rc genhtml_branch_coverage=1 00:10:38.656 --rc genhtml_function_coverage=1 00:10:38.656 --rc genhtml_legend=1 00:10:38.656 --rc geninfo_all_blocks=1 00:10:38.656 --rc geninfo_unexecuted_blocks=1 00:10:38.656 00:10:38.656 ' 00:10:38.656 04:27:35 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:38.656 04:27:35 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:38.656 04:27:35 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:38.656 04:27:35 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65745 00:10:38.656 04:27:35 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:38.656 04:27:35 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:38.656 04:27:35 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65745 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65745 ']' 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.656 04:27:35 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.657 [2024-11-27 04:27:35.203416] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
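The bdf discovery traced a few records up (get_first_nvme_bdf) decides which controller nvme_rpc drives: gen_nvme.sh emits an SPDK JSON config with one attach entry per controller, jq pulls each params.traddr, and the first address wins. A minimal standalone sketch of the same pipeline ($rootdir being the spdk checkout, as in the trace; the error handling is reduced to the emptiness check):

    get_first_nvme_bdf() {
        local bdfs
        # gen_nvme.sh prints one config entry per controller, each carrying params.traddr
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1  # this run found four: 0000:00:10.0 through 0000:00:13.0
        echo "${bdfs[0]}"                  # hence bdf=0000:00:10.0 above
    }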
00:10:38.657 [2024-11-27 04:27:35.203545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65745 ] 00:10:38.915 [2024-11-27 04:27:35.364832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:38.915 [2024-11-27 04:27:35.466273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.915 [2024-11-27 04:27:35.466505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.849 04:27:36 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.849 04:27:36 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:39.849 04:27:36 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:39.849 Nvme0n1 00:10:39.849 04:27:36 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:39.849 04:27:36 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:40.109 request: 00:10:40.109 { 00:10:40.109 "bdev_name": "Nvme0n1", 00:10:40.109 "filename": "non_existing_file", 00:10:40.109 "method": "bdev_nvme_apply_firmware", 00:10:40.109 "req_id": 1 00:10:40.109 } 00:10:40.109 Got JSON-RPC error response 00:10:40.109 response: 00:10:40.109 { 00:10:40.109 "code": -32603, 00:10:40.109 "message": "open file failed." 00:10:40.109 } 00:10:40.109 04:27:36 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:40.109 04:27:36 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:40.109 04:27:36 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:40.367 04:27:36 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:40.367 04:27:36 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65745 00:10:40.367 04:27:36 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65745 ']' 00:10:40.367 04:27:36 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65745 00:10:40.367 04:27:36 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:40.367 04:27:36 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.367 04:27:36 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65745 00:10:40.367 04:27:36 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:40.367 04:27:36 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:40.367 killing process with pid 65745 00:10:40.367 04:27:36 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65745' 00:10:40.367 04:27:36 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65745 00:10:40.367 04:27:36 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65745 00:10:41.802 00:10:41.802 real 0m3.377s 00:10:41.802 user 0m6.423s 00:10:41.802 sys 0m0.509s 00:10:41.802 04:27:38 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.802 04:27:38 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.802 ************************************ 00:10:41.802 END TEST nvme_rpc 00:10:41.802 ************************************ 00:10:41.802 04:27:38 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:41.802 04:27:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:41.802 04:27:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.802 04:27:38 -- common/autotest_common.sh@10 -- # set +x 00:10:41.802 ************************************ 00:10:41.802 START TEST nvme_rpc_timeouts 00:10:41.802 ************************************ 00:10:41.802 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:42.080 * Looking for test storage... 00:10:42.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:42.080 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:42.080 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:10:42.080 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:42.080 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:42.080 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.080 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.080 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.080 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.080 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.080 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.080 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.080 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.080 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.080 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.080 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.080 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:42.080 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:42.080 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.081 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.081 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:42.081 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:42.081 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.081 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:42.081 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.081 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:42.081 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:42.081 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.081 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:42.081 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.081 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.081 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.081 04:27:38 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:42.081 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.081 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:42.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.081 --rc genhtml_branch_coverage=1 00:10:42.081 --rc genhtml_function_coverage=1 00:10:42.081 --rc genhtml_legend=1 00:10:42.081 --rc geninfo_all_blocks=1 00:10:42.081 --rc geninfo_unexecuted_blocks=1 00:10:42.081 00:10:42.081 ' 00:10:42.081 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:42.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.081 --rc genhtml_branch_coverage=1 00:10:42.081 --rc genhtml_function_coverage=1 00:10:42.081 --rc genhtml_legend=1 00:10:42.081 --rc geninfo_all_blocks=1 00:10:42.081 --rc geninfo_unexecuted_blocks=1 00:10:42.081 00:10:42.081 ' 00:10:42.081 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:42.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.081 --rc genhtml_branch_coverage=1 00:10:42.081 --rc genhtml_function_coverage=1 00:10:42.081 --rc genhtml_legend=1 00:10:42.081 --rc geninfo_all_blocks=1 00:10:42.081 --rc geninfo_unexecuted_blocks=1 00:10:42.081 00:10:42.081 ' 00:10:42.081 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:42.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.081 --rc genhtml_branch_coverage=1 00:10:42.081 --rc genhtml_function_coverage=1 00:10:42.081 --rc genhtml_legend=1 00:10:42.081 --rc geninfo_all_blocks=1 00:10:42.081 --rc geninfo_unexecuted_blocks=1 00:10:42.081 00:10:42.081 ' 00:10:42.081 04:27:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:42.081 04:27:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65810 00:10:42.081 04:27:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65810 00:10:42.081 04:27:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65842 00:10:42.081 04:27:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
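The spdk_tgt launched above listens on /var/tmp/spdk.sock and is driven through rpc.py; stripped of xtrace noise, the test that follows is a save/modify/save/compare cycle. A minimal sketch (the trailing diff is illustrative and stands in for the per-setting grep/awk checks the script actually performs):

    rpc.py save_config > "$tmpfile_default_settings"      # snapshot the default bdev_nvme timeouts
    rpc.py bdev_nvme_set_options --timeout-us=12000000 \
           --timeout-admin-us=24000000 --action-on-timeout=abort
    rpc.py save_config > "$tmpfile_modified_settings"     # snapshot again after the change
    diff "$tmpfile_default_settings" "$tmpfile_modified_settings"   # illustrative stand-in for the checks below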
00:10:42.081 04:27:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65842 00:10:42.081 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65842 ']' 00:10:42.081 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.081 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.081 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.081 04:27:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:42.081 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.081 04:27:38 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:42.081 [2024-11-27 04:27:38.568379] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:42.081 [2024-11-27 04:27:38.568510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65842 ] 00:10:42.340 [2024-11-27 04:27:38.728192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:42.340 [2024-11-27 04:27:38.830494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.340 [2024-11-27 04:27:38.830683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.905 04:27:39 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.905 Checking default timeout settings: 00:10:42.905 04:27:39 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:42.905 04:27:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:42.905 04:27:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:43.471 Making settings changes with rpc: 00:10:43.471 04:27:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:43.471 04:27:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:43.471 Check default vs. modified settings: 00:10:43.471 04:27:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:43.471 04:27:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:43.730 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:43.730 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65810 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65810 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:43.988 Setting action_on_timeout is changed as expected. 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65810 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65810 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:43.988 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:43.989 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:10:43.989 Setting timeout_us is changed as expected. 
00:10:43.989 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:43.989 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65810 00:10:43.989 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:43.989 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:43.989 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:43.989 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:43.989 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:43.989 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65810 00:10:43.989 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:43.989 Setting timeout_admin_us is changed as expected. 00:10:43.989 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:43.989 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:43.989 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:43.989 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65810 /tmp/settings_modified_65810 00:10:43.989 04:27:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65842 00:10:43.989 04:27:40 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65842 ']' 00:10:43.989 04:27:40 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65842 00:10:43.989 04:27:40 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:43.989 04:27:40 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.989 04:27:40 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65842 00:10:43.989 04:27:40 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.989 04:27:40 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.989 killing process with pid 65842 00:10:43.989 04:27:40 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65842' 00:10:43.989 04:27:40 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65842 00:10:43.989 04:27:40 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65842 00:10:45.361 RPC TIMEOUT SETTING TEST PASSED. 00:10:45.361 04:27:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
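The three checks above (action_on_timeout, timeout_us, timeout_admin_us) all follow the same pattern; condensed from the nvme_rpc_timeouts.sh@38-@47 trace, with the failure branch simplified:

    settings_to_check='action_on_timeout timeout_us timeout_admin_us'
    for setting in $settings_to_check; do
        before=$(grep "$setting" "$tmpfile_default_settings"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep  "$setting" "$tmpfile_modified_settings" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" == "$after" ] && exit 1    # a setting that did not change fails the test
        echo "Setting $setting is changed as expected."
    done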
00:10:45.361 00:10:45.361 real 0m3.577s 00:10:45.361 user 0m6.966s 00:10:45.361 sys 0m0.497s 00:10:45.361 04:27:41 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.361 ************************************ 00:10:45.361 END TEST nvme_rpc_timeouts 00:10:45.361 04:27:41 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:45.361 ************************************ 00:10:45.619 04:27:41 -- spdk/autotest.sh@239 -- # uname -s 00:10:45.619 04:27:41 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:45.619 04:27:41 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:45.619 04:27:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:45.619 04:27:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.619 04:27:41 -- common/autotest_common.sh@10 -- # set +x 00:10:45.619 ************************************ 00:10:45.619 START TEST sw_hotplug 00:10:45.619 ************************************ 00:10:45.619 04:27:41 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:45.619 * Looking for test storage... 00:10:45.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:45.619 04:27:42 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:45.619 04:27:42 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:45.619 04:27:42 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:45.619 04:27:42 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.619 04:27:42 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:45.619 04:27:42 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.619 04:27:42 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.619 --rc genhtml_branch_coverage=1 00:10:45.619 --rc genhtml_function_coverage=1 00:10:45.619 --rc genhtml_legend=1 00:10:45.619 --rc geninfo_all_blocks=1 00:10:45.619 --rc geninfo_unexecuted_blocks=1 00:10:45.619 00:10:45.619 ' 00:10:45.619 04:27:42 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.619 --rc genhtml_branch_coverage=1 00:10:45.619 --rc genhtml_function_coverage=1 00:10:45.619 --rc genhtml_legend=1 00:10:45.619 --rc geninfo_all_blocks=1 00:10:45.619 --rc geninfo_unexecuted_blocks=1 00:10:45.619 00:10:45.619 ' 00:10:45.619 04:27:42 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.619 --rc genhtml_branch_coverage=1 00:10:45.619 --rc genhtml_function_coverage=1 00:10:45.619 --rc genhtml_legend=1 00:10:45.619 --rc geninfo_all_blocks=1 00:10:45.619 --rc geninfo_unexecuted_blocks=1 00:10:45.619 00:10:45.619 ' 00:10:45.619 04:27:42 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.619 --rc genhtml_branch_coverage=1 00:10:45.619 --rc genhtml_function_coverage=1 00:10:45.619 --rc genhtml_legend=1 00:10:45.619 --rc geninfo_all_blocks=1 00:10:45.619 --rc geninfo_unexecuted_blocks=1 00:10:45.619 00:10:45.619 ' 00:10:45.619 04:27:42 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:45.877 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:46.135 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:46.135 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:46.135 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:46.135 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:46.135 04:27:42 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:46.135 04:27:42 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:46.135 04:27:42 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
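The cmp_versions walk above decides which lcov option spelling to use: versions are split on '.', '-' and ':' and compared field by field. A condensed stand-in for the script's lt()/cmp_versions pair (version_lt is a name chosen here; the sketch assumes purely numeric fields, which is all the trace exercises):

    version_lt() {
        local -a v1 v2; local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} && i < ${#v2[@]}; i++)); do
            (( 10#${v1[i]} < 10#${v2[i]} )) && return 0
            (( 10#${v1[i]} > 10#${v2[i]} )) && return 1
        done
        (( ${#v1[@]} < ${#v2[@]} ))   # equal prefix: the shorter version is older
    }
    version_lt 1.15 2 && echo "lcov older than 2: keep the lcov_branch_coverage=1 spellings"

Here lcov reported 1.15, so the pre-2.0 --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 options seen in LCOV_OPTS above are selected.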
00:10:46.135 04:27:42 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:46.135 04:27:42 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:46.135 04:27:42 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:46.136 04:27:42 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:46.136 04:27:42 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:46.136 04:27:42 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:46.136 04:27:42 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:46.136 04:27:42 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:46.136 04:27:42 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:46.136 04:27:42 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:46.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:46.393 Waiting for block devices as requested 00:10:46.652 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:46.652 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:46.652 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:46.652 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:51.915 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:51.915 04:27:48 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:51.915 04:27:48 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:52.172 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:52.172 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:52.172 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:52.429 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:52.686 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:52.686 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:52.686 04:27:49 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:52.686 04:27:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:52.686 04:27:49 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:52.687 04:27:49 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:52.687 04:27:49 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66699 00:10:52.687 04:27:49 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:52.687 04:27:49 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:52.687 04:27:49 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:52.687 04:27:49 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:52.687 04:27:49 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:52.687 04:27:49 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:52.687 04:27:49 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:52.687 04:27:49 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:52.687 04:27:49 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:52.687 04:27:49 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:52.687 04:27:49 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:52.687 04:27:49 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:52.687 04:27:49 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:52.687 04:27:49 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:52.945 Initializing NVMe Controllers 00:10:52.945 Attaching to 0000:00:10.0 00:10:52.945 Attaching to 0000:00:11.0 00:10:52.945 Attached to 0000:00:11.0 00:10:52.945 Attached to 0000:00:10.0 00:10:52.945 Initialization complete. Starting I/O... 
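Before the hotplug run above started, nvme_in_userspace built the BDF list by PCI class code (class 01 = mass storage, subclass 08 = NVM, progif 02 = NVMe); the pipeline traced in scripts/common.sh reduces to:

    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # -> 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0, which
    #    nvme_count=2 then trims to the first two controllers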
00:10:52.945 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:52.945 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:52.945 00:10:53.880 QEMU NVMe Ctrl (12341 ): 2673 I/Os completed (+2673) 00:10:53.880 QEMU NVMe Ctrl (12340 ): 2743 I/Os completed (+2743) 00:10:53.880 00:10:55.258 QEMU NVMe Ctrl (12341 ): 5793 I/Os completed (+3120) 00:10:55.258 QEMU NVMe Ctrl (12340 ): 5904 I/Os completed (+3161) 00:10:55.258 00:10:55.828 QEMU NVMe Ctrl (12341 ): 8837 I/Os completed (+3044) 00:10:55.828 QEMU NVMe Ctrl (12340 ): 9112 I/Os completed (+3208) 00:10:55.828 00:10:57.215 QEMU NVMe Ctrl (12341 ): 11820 I/Os completed (+2983) 00:10:57.215 QEMU NVMe Ctrl (12340 ): 12172 I/Os completed (+3060) 00:10:57.215 00:10:58.155 QEMU NVMe Ctrl (12341 ): 15208 I/Os completed (+3388) 00:10:58.155 QEMU NVMe Ctrl (12340 ): 15616 I/Os completed (+3444) 00:10:58.155 00:10:58.722 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:58.722 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.722 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.722 [2024-11-27 04:27:55.218487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:58.722 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:58.722 [2024-11-27 04:27:55.220293] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 [2024-11-27 04:27:55.220359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 [2024-11-27 04:27:55.220384] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 [2024-11-27 04:27:55.220421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:58.722 [2024-11-27 04:27:55.222361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 [2024-11-27 04:27:55.222415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 [2024-11-27 04:27:55.222429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 [2024-11-27 04:27:55.222444] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.722 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.722 [2024-11-27 04:27:55.241809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
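The "in failed state" and "aborting outstanding command" errors above are expected: they are the driver's view of a surprise removal. bash xtrace does not print redirection targets, so the echo 1 at sw_hotplug.sh@40 shows no destination; the shape of one event matches the standard PCI sysfs knobs (the remove path here is an assumption; /sys/bus/pci/rescan is the path the script's own trap uses later in this log):

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # assumed target: surprise-remove the controller
    sleep 6                                       # hotplug_wait=6 from the test
    echo 1 > /sys/bus/pci/rescan                  # re-discover, then rebind uio_pci_generic and re-attach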
00:10:58.722 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:58.722 [2024-11-27 04:27:55.242975] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 [2024-11-27 04:27:55.243026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 [2024-11-27 04:27:55.243049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 [2024-11-27 04:27:55.243064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:58.722 [2024-11-27 04:27:55.244851] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 [2024-11-27 04:27:55.244898] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 [2024-11-27 04:27:55.244917] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 [2024-11-27 04:27:55.244941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.722 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:58.722 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:58.981 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:58.981 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:58.981 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:58.981 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:58.981 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:58.981 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:58.981 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:58.981 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:58.981 Attaching to 0000:00:10.0 00:10:58.981 Attached to 0000:00:10.0 00:10:58.981 QEMU NVMe Ctrl (12340 ): 64 I/Os completed (+64) 00:10:58.981 00:10:58.981 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:58.981 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:58.981 04:27:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:58.981 Attaching to 0000:00:11.0 00:10:58.981 Attached to 0000:00:11.0 00:10:59.923 QEMU NVMe Ctrl (12340 ): 3521 I/Os completed (+3457) 00:10:59.923 QEMU NVMe Ctrl (12341 ): 3262 I/Os completed (+3262) 00:10:59.923 00:11:00.856 QEMU NVMe Ctrl (12340 ): 6833 I/Os completed (+3312) 00:11:00.856 QEMU NVMe Ctrl (12341 ): 6612 I/Os completed (+3350) 00:11:00.856 00:11:02.234 QEMU NVMe Ctrl (12340 ): 10250 I/Os completed (+3417) 00:11:02.234 QEMU NVMe Ctrl (12341 ): 10040 I/Os completed (+3428) 00:11:02.234 00:11:03.178 QEMU NVMe Ctrl (12340 ): 13565 I/Os completed (+3315) 00:11:03.178 QEMU NVMe Ctrl (12341 ): 13447 I/Os completed (+3407) 00:11:03.178 00:11:04.116 QEMU NVMe Ctrl (12340 ): 16763 I/Os completed (+3198) 00:11:04.116 QEMU NVMe Ctrl (12341 ): 16601 I/Os completed (+3154) 00:11:04.116 00:11:05.050 QEMU NVMe Ctrl (12340 ): 20069 I/Os completed (+3306) 00:11:05.050 QEMU NVMe Ctrl (12341 ): 20013 I/Os completed (+3412) 00:11:05.050 00:11:05.990 QEMU NVMe Ctrl (12340 ): 23696 I/Os completed (+3627) 00:11:05.990 QEMU NVMe Ctrl (12341 ): 23770 I/Os completed (+3757) 00:11:05.990 00:11:06.927 QEMU NVMe Ctrl (12340 ): 27296 I/Os completed (+3600) 00:11:06.927 
QEMU NVMe Ctrl (12341 ): 27369 I/Os completed (+3599) 00:11:06.927 00:11:07.864 QEMU NVMe Ctrl (12340 ): 30546 I/Os completed (+3250) 00:11:07.864 QEMU NVMe Ctrl (12341 ): 30534 I/Os completed (+3165) 00:11:07.864 00:11:09.243 QEMU NVMe Ctrl (12340 ): 33615 I/Os completed (+3069) 00:11:09.243 QEMU NVMe Ctrl (12341 ): 33672 I/Os completed (+3138) 00:11:09.243 00:11:10.177 QEMU NVMe Ctrl (12340 ): 36809 I/Os completed (+3194) 00:11:10.177 QEMU NVMe Ctrl (12341 ): 36861 I/Os completed (+3189) 00:11:10.177 00:11:11.118 QEMU NVMe Ctrl (12340 ): 40066 I/Os completed (+3257) 00:11:11.118 QEMU NVMe Ctrl (12341 ): 40350 I/Os completed (+3489) 00:11:11.118 00:11:11.118 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:11.118 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:11.118 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:11.118 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:11.118 [2024-11-27 04:28:07.468038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:11.118 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:11.118 [2024-11-27 04:28:07.469040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.118 [2024-11-27 04:28:07.469104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.118 [2024-11-27 04:28:07.469120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.118 [2024-11-27 04:28:07.469136] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.118 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:11.118 [2024-11-27 04:28:07.470770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.118 [2024-11-27 04:28:07.470812] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.118 [2024-11-27 04:28:07.470824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.118 [2024-11-27 04:28:07.470837] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.118 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/device 00:11:11.118 EAL: Scan for (pci) bus failed. 00:11:11.119 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:11.119 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:11.119 [2024-11-27 04:28:07.489331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:11.119 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:11.119 [2024-11-27 04:28:07.490240] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.119 [2024-11-27 04:28:07.490279] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.119 [2024-11-27 04:28:07.490296] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.119 [2024-11-27 04:28:07.490309] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.119 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:11.119 [2024-11-27 04:28:07.491701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.119 [2024-11-27 04:28:07.491744] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.119 [2024-11-27 04:28:07.491758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.119 [2024-11-27 04:28:07.491770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.119 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:11.119 EAL: Scan for (pci) bus failed. 00:11:11.119 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:11.119 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:11.119 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.119 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.119 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:11.119 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:11.119 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.119 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.119 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.119 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:11.119 Attaching to 0000:00:10.0 00:11:11.119 Attached to 0000:00:10.0 00:11:11.378 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:11.378 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.378 04:28:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:11.378 Attaching to 0000:00:11.0 00:11:11.378 Attached to 0000:00:11.0 00:11:11.950 QEMU NVMe Ctrl (12340 ): 2806 I/Os completed (+2806) 00:11:11.950 QEMU NVMe Ctrl (12341 ): 2531 I/Os completed (+2531) 00:11:11.950 00:11:12.886 QEMU NVMe Ctrl (12340 ): 6053 I/Os completed (+3247) 00:11:12.886 QEMU NVMe Ctrl (12341 ): 5845 I/Os completed (+3314) 00:11:12.886 00:11:14.258 QEMU NVMe Ctrl (12340 ): 9208 I/Os completed (+3155) 00:11:14.258 QEMU NVMe Ctrl (12341 ): 9068 I/Os completed (+3223) 00:11:14.258 00:11:15.187 QEMU NVMe Ctrl (12340 ): 12506 I/Os completed (+3298) 00:11:15.187 QEMU NVMe Ctrl (12341 ): 12477 I/Os completed (+3409) 00:11:15.187 00:11:16.127 QEMU NVMe Ctrl (12340 ): 15699 I/Os completed (+3193) 00:11:16.127 QEMU NVMe Ctrl (12341 ): 15723 I/Os completed (+3246) 00:11:16.127 00:11:17.068 QEMU NVMe Ctrl (12340 ): 18858 I/Os completed (+3159) 00:11:17.068 QEMU NVMe Ctrl (12341 ): 18966 I/Os completed (+3243) 00:11:17.068 00:11:18.007 QEMU NVMe Ctrl (12340 ): 21895 I/Os completed (+3037) 00:11:18.007 QEMU NVMe Ctrl (12341 ): 22055 I/Os completed (+3089) 00:11:18.007 
00:11:18.944 QEMU NVMe Ctrl (12340 ): 25113 I/Os completed (+3218) 00:11:18.944 QEMU NVMe Ctrl (12341 ): 25283 I/Os completed (+3228) 00:11:18.944 00:11:19.881 QEMU NVMe Ctrl (12340 ): 28436 I/Os completed (+3323) 00:11:19.881 QEMU NVMe Ctrl (12341 ): 28643 I/Os completed (+3360) 00:11:19.881 00:11:21.264 QEMU NVMe Ctrl (12340 ): 31743 I/Os completed (+3307) 00:11:21.264 QEMU NVMe Ctrl (12341 ): 31953 I/Os completed (+3310) 00:11:21.264 00:11:21.837 QEMU NVMe Ctrl (12340 ): 35180 I/Os completed (+3437) 00:11:21.837 QEMU NVMe Ctrl (12341 ): 35382 I/Os completed (+3429) 00:11:21.837 00:11:23.237 QEMU NVMe Ctrl (12340 ): 38452 I/Os completed (+3272) 00:11:23.237 QEMU NVMe Ctrl (12341 ): 38659 I/Os completed (+3277) 00:11:23.237 00:11:23.237 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:23.237 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:23.237 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.237 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.237 [2024-11-27 04:28:19.736067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:23.237 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:23.237 [2024-11-27 04:28:19.737398] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 [2024-11-27 04:28:19.737452] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 [2024-11-27 04:28:19.737478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 [2024-11-27 04:28:19.737501] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:23.237 [2024-11-27 04:28:19.739500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 [2024-11-27 04:28:19.739548] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 [2024-11-27 04:28:19.739566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 [2024-11-27 04:28:19.739584] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.237 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.237 [2024-11-27 04:28:19.757098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
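This is the last of the three hotplug events; once it completes, debug_remove_attach_helper reports wall time through bash's TIMEFORMAT, as the "took 42.78s" summary further down shows. The timing amounts to:

    TIMEFORMAT=%2R                        # print only real (wall) time, two decimals
    time remove_attach_helper 3 6 false   # 3 hotplug events, 6 s hotplug_wait, use_bdev=false
    # -> 42.78, reported as:
    # remove_attach_helper took 42.78s to complete (handling 2 nvme drive(s))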
00:11:23.237 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:23.237 [2024-11-27 04:28:19.758071] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 [2024-11-27 04:28:19.758124] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 [2024-11-27 04:28:19.758146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 [2024-11-27 04:28:19.758167] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:23.237 [2024-11-27 04:28:19.759622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 [2024-11-27 04:28:19.759662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 [2024-11-27 04:28:19.759683] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 [2024-11-27 04:28:19.759699] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.237 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:23.237 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:23.237 EAL: Scan for (pci) bus failed. 00:11:23.237 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:23.498 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.498 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.498 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:23.498 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:23.498 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.498 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.498 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.498 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:23.498 Attaching to 0000:00:10.0 00:11:23.499 Attached to 0000:00:10.0 00:11:23.499 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:23.499 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.499 04:28:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:23.499 Attaching to 0000:00:11.0 00:11:23.499 Attached to 0000:00:11.0 00:11:23.499 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:23.499 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:23.499 [2024-11-27 04:28:19.998024] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:35.743 04:28:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:35.743 04:28:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:35.743 04:28:31 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.78 00:11:35.743 04:28:31 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.78 00:11:35.743 04:28:31 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:35.743 04:28:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.78 00:11:35.743 04:28:31 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.78 2 00:11:35.743 remove_attach_helper took 42.78s to complete (handling 2 nvme drive(s)) 04:28:31 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:42.343 04:28:37 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66699 00:11:42.343 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66699) - No such process 00:11:42.343 04:28:37 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66699 00:11:42.343 04:28:37 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:42.343 04:28:37 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:42.343 04:28:37 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:42.343 04:28:37 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67243 00:11:42.343 04:28:37 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:42.343 04:28:37 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67243 00:11:42.343 04:28:37 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67243 ']' 00:11:42.343 04:28:38 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.343 04:28:38 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.343 04:28:38 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.343 04:28:37 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:42.343 04:28:38 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.343 04:28:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.343 [2024-11-27 04:28:38.074349] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:42.343 [2024-11-27 04:28:38.074460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67243 ] 00:11:42.343 [2024-11-27 04:28:38.229518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.343 [2024-11-27 04:28:38.330990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.343 04:28:38 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.343 04:28:38 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:42.343 04:28:38 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:42.343 04:28:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.343 04:28:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.601 04:28:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.601 04:28:38 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:42.601 04:28:38 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:42.601 04:28:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:42.601 04:28:38 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:42.601 04:28:38 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:42.601 04:28:38 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:42.601 04:28:38 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:42.601 04:28:38 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:42.601 04:28:38 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:42.601 04:28:38 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:42.601 04:28:38 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:42.601 04:28:38 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:42.601 04:28:38 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:49.244 04:28:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.244 04:28:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.244 04:28:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.244 04:28:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.244 04:28:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.244 04:28:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:49.244 04:28:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.244 04:28:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.244 04:28:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.244 04:28:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.244 04:28:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.244 04:28:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.244 04:28:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.244 04:28:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.244 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:49.244 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:49.244 [2024-11-27 04:28:45.029901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:49.244 [2024-11-27 04:28:45.031316] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.244 [2024-11-27 04:28:45.031357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.244 [2024-11-27 04:28:45.031371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.244 [2024-11-27 04:28:45.031391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.244 [2024-11-27 04:28:45.031399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.244 [2024-11-27 04:28:45.031408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.244 [2024-11-27 04:28:45.031416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.244 [2024-11-27 04:28:45.031424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.244 [2024-11-27 04:28:45.031430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.244 [2024-11-27 04:28:45.031442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.244 [2024-11-27 04:28:45.031449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.244 [2024-11-27 04:28:45.031457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.244 [2024-11-27 04:28:45.429898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
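This second pass (tgt_run_hotplug) watches the removal from inside the target instead of the hotplug example binary: bdev_nvme_set_hotplug -e enables the target's hotplug monitor, and the bdev_bdfs helper snapshots which controllers the target still sees. In rpc.py terms:

    rpc.py bdev_nvme_set_hotplug -e       # enable the target's hotplug monitor
    rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    # -> 0000:00:10.0 0000:00:11.0 before removal, empty once both are gone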
00:11:49.244 [2024-11-27 04:28:45.431413] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.244 [2024-11-27 04:28:45.431456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.244 [2024-11-27 04:28:45.431470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.244 [2024-11-27 04:28:45.431489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.244 [2024-11-27 04:28:45.431499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.244 [2024-11-27 04:28:45.431506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.244 [2024-11-27 04:28:45.431515] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.244 [2024-11-27 04:28:45.431522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.244 [2024-11-27 04:28:45.431530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.244 [2024-11-27 04:28:45.431538] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.245 [2024-11-27 04:28:45.431546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.245 [2024-11-27 04:28:45.431552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.245 04:28:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.245 04:28:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.245 04:28:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.245 04:28:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:01.444 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:01.444 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:01.444 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:01.444 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.444 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.444 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.444 04:28:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.444 04:28:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.444 04:28:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.444 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:01.444 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:01.444 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:01.444 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:01.444 [2024-11-27 04:28:57.830070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:01.444 [2024-11-27 04:28:57.831654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.444 [2024-11-27 04:28:57.831694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.444 [2024-11-27 04:28:57.831705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.444 [2024-11-27 04:28:57.831737] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.444 [2024-11-27 04:28:57.831745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.444 [2024-11-27 04:28:57.831754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.444 [2024-11-27 04:28:57.831762] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.444 [2024-11-27 04:28:57.831770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.444 [2024-11-27 04:28:57.831777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.444 [2024-11-27 04:28:57.831785] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.444 [2024-11-27 04:28:57.831792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.444 [2024-11-27 04:28:57.831800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.444 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:01.445 04:28:57 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:01.445 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:01.445 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.445 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.445 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.445 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.445 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.445 04:28:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.445 04:28:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.445 04:28:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.445 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:01.445 04:28:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:01.703 [2024-11-27 04:28:58.230079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:01.703 [2024-11-27 04:28:58.231454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.703 [2024-11-27 04:28:58.231489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.703 [2024-11-27 04:28:58.231504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.703 [2024-11-27 04:28:58.231520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.703 [2024-11-27 04:28:58.231531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.703 [2024-11-27 04:28:58.231538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.703 [2024-11-27 04:28:58.231547] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.703 [2024-11-27 04:28:58.231553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.703 [2024-11-27 04:28:58.231562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.703 [2024-11-27 04:28:58.231569] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.703 [2024-11-27 04:28:58.231577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.703 [2024-11-27 04:28:58.231584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.961 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:01.961 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.961 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.961 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.961 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.961 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:01.961 04:28:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.961 04:28:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.961 04:28:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.961 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:01.961 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:01.961 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.961 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.961 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:02.219 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:02.219 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.219 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.219 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.219 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:02.219 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:02.219 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.219 04:28:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.460 04:29:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.460 04:29:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.460 04:29:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.460 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.460 04:29:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.460 04:29:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.460 04:29:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.460 [2024-11-27 04:29:10.730280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:14.460 [2024-11-27 04:29:10.731713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.460 [2024-11-27 04:29:10.731760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.460 [2024-11-27 04:29:10.731772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.460 [2024-11-27 04:29:10.731790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.460 [2024-11-27 04:29:10.731798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.460 [2024-11-27 04:29:10.731808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.460 [2024-11-27 04:29:10.731816] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.460 [2024-11-27 04:29:10.731824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.460 [2024-11-27 04:29:10.731831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.460 [2024-11-27 04:29:10.731840] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.460 [2024-11-27 04:29:10.731847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.461 [2024-11-27 04:29:10.731855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.461 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:14.461 04:29:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:14.719 [2024-11-27 04:29:11.130287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:14.719 [2024-11-27 04:29:11.131670] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.719 [2024-11-27 04:29:11.131706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.719 [2024-11-27 04:29:11.131718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.719 [2024-11-27 04:29:11.131744] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.719 [2024-11-27 04:29:11.131754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.719 [2024-11-27 04:29:11.131761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.719 [2024-11-27 04:29:11.131769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.719 [2024-11-27 04:29:11.131776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.719 [2024-11-27 04:29:11.131786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.719 [2024-11-27 04:29:11.131792] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.719 [2024-11-27 04:29:11.131800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.719 [2024-11-27 04:29:11.131807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.719 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:14.719 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:14.719 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:14.719 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.719 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.719 04:29:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.719 04:29:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.719 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.719 04:29:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.719 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:14.719 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:14.977 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:14.977 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:14.977 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:14.977 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:14.977 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:14.977 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:14.977 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:14.977 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:14.977 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:14.977 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:14.977 04:29:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.61 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.61 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.61 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.61 2 00:12:27.190 remove_attach_helper took 44.61s to complete (handling 2 nvme drive(s)) 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:27.190 04:29:23 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:27.190 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:27.191 04:29:23 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:27.191 04:29:23 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:33.746 04:29:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:33.746 04:29:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:33.746 04:29:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:33.746 04:29:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:33.746 04:29:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:33.746 04:29:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:33.746 04:29:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:33.746 04:29:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:33.746 04:29:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:33.746 04:29:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:33.746 04:29:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:33.746 04:29:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.746 04:29:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:33.746 04:29:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.746 04:29:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:33.746 04:29:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:33.746 [2024-11-27 04:29:29.673510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:33.746 [2024-11-27 04:29:29.674559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.746 [2024-11-27 04:29:29.674592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.746 [2024-11-27 04:29:29.674604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.746 [2024-11-27 04:29:29.674622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.746 [2024-11-27 04:29:29.674630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.746 [2024-11-27 04:29:29.674638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.746 [2024-11-27 04:29:29.674646] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.746 [2024-11-27 04:29:29.674654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.746 [2024-11-27 04:29:29.674661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.746 [2024-11-27 04:29:29.674671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.746 [2024-11-27 04:29:29.674678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.746 [2024-11-27 04:29:29.674688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.746 [2024-11-27 04:29:30.073511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:33.746 [2024-11-27 04:29:30.074667] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.746 [2024-11-27 04:29:30.074702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.746 [2024-11-27 04:29:30.074714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.746 [2024-11-27 04:29:30.074745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.746 [2024-11-27 04:29:30.074755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.746 [2024-11-27 04:29:30.074763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.746 [2024-11-27 04:29:30.074772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.746 [2024-11-27 04:29:30.074778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.746 [2024-11-27 04:29:30.074787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.746 [2024-11-27 04:29:30.074794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.746 [2024-11-27 04:29:30.074802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.746 [2024-11-27 04:29:30.074809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.746 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:33.746 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:33.746 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:33.746 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:33.746 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:33.746 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:33.746 04:29:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.746 04:29:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:33.746 04:29:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.746 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:33.746 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:33.746 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:33.746 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:33.746 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:33.746 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:34.004 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:34.004 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:34.004 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:34.004 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:34.004 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:34.004 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:34.004 04:29:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:46.230 04:29:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.230 04:29:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:46.230 04:29:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:46.230 [2024-11-27 04:29:42.473802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:46.230 [2024-11-27 04:29:42.475092] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.230 [2024-11-27 04:29:42.475129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.230 [2024-11-27 04:29:42.475141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.230 [2024-11-27 04:29:42.475160] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.230 [2024-11-27 04:29:42.475167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.230 [2024-11-27 04:29:42.475176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.230 [2024-11-27 04:29:42.475184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.230 [2024-11-27 04:29:42.475192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.230 [2024-11-27 04:29:42.475199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.230 [2024-11-27 04:29:42.475207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.230 [2024-11-27 04:29:42.475214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.230 [2024-11-27 04:29:42.475222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:46.230 04:29:42 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:46.230 04:29:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.230 04:29:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:46.230 04:29:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:46.230 04:29:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:46.488 [2024-11-27 04:29:42.873799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:46.488 [2024-11-27 04:29:42.874873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.488 [2024-11-27 04:29:42.874905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.488 [2024-11-27 04:29:42.874919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.488 [2024-11-27 04:29:42.874935] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.488 [2024-11-27 04:29:42.874947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.488 [2024-11-27 04:29:42.874954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.488 [2024-11-27 04:29:42.874963] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.488 [2024-11-27 04:29:42.874970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.488 [2024-11-27 04:29:42.874978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.488 [2024-11-27 04:29:42.874985] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.488 [2024-11-27 04:29:42.874994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.488 [2024-11-27 04:29:42.875000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.488 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:46.488 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:46.488 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:46.488 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:46.488 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:46.488 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:46.488 04:29:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.488 04:29:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:46.488 04:29:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.488 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:46.488 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:46.747 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:46.747 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:46.747 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:46.747 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:46.747 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:46.747 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:46.747 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:46.747 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:46.747 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:46.747 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:46.747 04:29:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:58.940 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:58.940 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:58.940 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:58.940 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:58.940 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:58.940 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:58.940 04:29:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.940 04:29:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:58.940 04:29:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.940 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:58.940 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:58.940 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:58.940 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:58.940 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:58.940 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:58.940 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:58.941 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:58.941 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:58.941 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:58.941 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:58.941 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:58.941 04:29:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.941 04:29:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:58.941 [2024-11-27 04:29:55.374045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:58.941 [2024-11-27 04:29:55.375113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.941 [2024-11-27 04:29:55.375145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.941 [2024-11-27 04:29:55.375156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.941 [2024-11-27 04:29:55.375173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.941 [2024-11-27 04:29:55.375181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.941 [2024-11-27 04:29:55.375191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.941 [2024-11-27 04:29:55.375199] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.941 [2024-11-27 04:29:55.375209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.941 [2024-11-27 04:29:55.375216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.941 [2024-11-27 04:29:55.375225] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.941 [2024-11-27 04:29:55.375231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.941 [2024-11-27 04:29:55.375240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.941 04:29:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.941 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:58.941 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:59.505 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:59.505 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:59.505 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:59.505 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:59.505 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:59.505 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:59.505 04:29:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.505 04:29:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:59.505 04:29:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.505 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:59.505 04:29:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:59.505 [2024-11-27 04:29:55.974055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:59.505 [2024-11-27 04:29:55.975161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.505 [2024-11-27 04:29:55.975195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.505 [2024-11-27 04:29:55.975208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.505 [2024-11-27 04:29:55.975224] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.505 [2024-11-27 04:29:55.975234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.505 [2024-11-27 04:29:55.975241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.505 [2024-11-27 04:29:55.975251] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.505 [2024-11-27 04:29:55.975258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.505 [2024-11-27 04:29:55.975266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.505 [2024-11-27 04:29:55.975274] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.505 [2024-11-27 04:29:55.975286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.505 [2024-11-27 04:29:55.975293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:00.140 04:29:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.140 04:29:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:00.140 04:29:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
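The hotplug cycle traced above follows a single pattern: detach both NVMe devices, poll the SPDK target until no NVMe-backed bdevs remain, then rescan/rebind and poll until both BDFs reappear. A minimal sketch of that polling pattern, assuming rpc.py from SPDK's scripts/ directory talking to the target over the default RPC socket (the traced rpc_cmd wrapper and the sysfs nodes behind the echo commands are not visible in this log, so both are assumptions):

    # Sketch only: the jq filter and sort -u mirror the traced pipeline;
    # the rpc.py invocation is an assumption.
    bdev_bdfs() {
        rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # After detach: wait until no NVMe bdevs are left (the traced
    # "Still waiting for %s to be gone" loop).
    while (( $(bdev_bdfs | wc -l) > 0 )); do
        printf 'Still waiting for %s to be gone\n' $(bdev_bdfs)
        sleep 0.5
    done

    # After rescan/rebind: wait until both devices show up again.
    expected='0000:00:10.0 0000:00:11.0'
    until [[ $(bdev_bdfs | xargs) == "$expected" ]]; do
        sleep 0.5
    done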
00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:00.140 04:29:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:12.393 04:30:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:12.393 04:30:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:12.393 04:30:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:12.393 04:30:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:12.393 04:30:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:12.393 04:30:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.393 04:30:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:12.393 04:30:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.15 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.15 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:12.393 04:30:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.15 00:13:12.393 04:30:08 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.15 2 00:13:12.393 remove_attach_helper took 45.15s to complete (handling 2 nvme drive(s)) 04:30:08 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:12.393 04:30:08 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67243 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67243 ']' 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67243 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67243 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.393 killing process with pid 67243 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67243' 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67243 00:13:12.393 04:30:08 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67243 00:13:13.766 04:30:09 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:13.766 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:14.024 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:14.024 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:14.281 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:14.281 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:14.281 00:13:14.281 real 2m28.805s 00:13:14.281 user 1m51.053s 00:13:14.281 sys 0m16.341s 00:13:14.281 04:30:10 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.281 ************************************ 00:13:14.281 END TEST sw_hotplug 00:13:14.281 04:30:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:14.281 ************************************ 00:13:14.281 04:30:10 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:14.281 04:30:10 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:14.281 04:30:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:14.281 04:30:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.281 04:30:10 -- common/autotest_common.sh@10 -- # set +x 00:13:14.281 ************************************ 00:13:14.282 START TEST nvme_xnvme 00:13:14.282 ************************************ 00:13:14.282 04:30:10 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:14.282 * Looking for test storage... 00:13:14.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:14.282 04:30:10 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:14.282 04:30:10 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:14.282 04:30:10 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:14.543 04:30:10 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.543 04:30:10 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:14.543 04:30:10 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.543 04:30:10 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:14.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.543 --rc genhtml_branch_coverage=1 00:13:14.543 --rc genhtml_function_coverage=1 00:13:14.543 --rc genhtml_legend=1 00:13:14.543 --rc geninfo_all_blocks=1 00:13:14.543 --rc geninfo_unexecuted_blocks=1 00:13:14.543 00:13:14.543 ' 00:13:14.543 04:30:10 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:14.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.543 --rc genhtml_branch_coverage=1 00:13:14.543 --rc genhtml_function_coverage=1 00:13:14.543 --rc genhtml_legend=1 00:13:14.543 --rc geninfo_all_blocks=1 00:13:14.543 --rc geninfo_unexecuted_blocks=1 00:13:14.543 00:13:14.543 ' 00:13:14.543 04:30:10 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:14.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.543 --rc genhtml_branch_coverage=1 00:13:14.543 --rc genhtml_function_coverage=1 00:13:14.543 --rc genhtml_legend=1 00:13:14.543 --rc geninfo_all_blocks=1 00:13:14.543 --rc geninfo_unexecuted_blocks=1 00:13:14.543 00:13:14.543 ' 00:13:14.543 04:30:10 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:14.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.543 --rc genhtml_branch_coverage=1 00:13:14.543 --rc genhtml_function_coverage=1 00:13:14.543 --rc genhtml_legend=1 00:13:14.543 --rc geninfo_all_blocks=1 00:13:14.543 --rc geninfo_unexecuted_blocks=1 00:13:14.543 00:13:14.543 ' 00:13:14.543 04:30:10 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:13:14.543 04:30:10 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:14.543 04:30:10 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:14.543 04:30:10 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:13:14.544 04:30:10 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:14.544 04:30:10 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:14.544 04:30:10 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:14.544 04:30:10 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:14.544 04:30:10 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:14.544 04:30:10 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:14.544 04:30:10 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:14.544 04:30:10 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:14.544 04:30:10 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:14.544 04:30:10 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:14.544 04:30:10 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:14.544 04:30:10 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:14.544 04:30:10 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:14.544 04:30:10 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:14.544 04:30:10 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:14.544 04:30:10 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:13:14.544 04:30:10 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:14.544 04:30:10 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:14.544 04:30:10 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:14.544 04:30:10 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:14.544 04:30:10 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:14.544 04:30:10 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:14.544 04:30:10 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:14.544 04:30:10 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:14.544 #define SPDK_CONFIG_H 00:13:14.544 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:14.544 #define SPDK_CONFIG_APPS 1 00:13:14.544 #define SPDK_CONFIG_ARCH native 00:13:14.544 #define SPDK_CONFIG_ASAN 1 00:13:14.544 #undef SPDK_CONFIG_AVAHI 00:13:14.544 #undef SPDK_CONFIG_CET 00:13:14.544 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:14.544 #define SPDK_CONFIG_COVERAGE 1 00:13:14.544 #define SPDK_CONFIG_CROSS_PREFIX 00:13:14.544 #undef SPDK_CONFIG_CRYPTO 00:13:14.544 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:14.544 #undef SPDK_CONFIG_CUSTOMOCF 00:13:14.544 #undef SPDK_CONFIG_DAOS 00:13:14.544 #define SPDK_CONFIG_DAOS_DIR 00:13:14.544 #define SPDK_CONFIG_DEBUG 1 00:13:14.544 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:14.544 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:14.544 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:14.544 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:14.544 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:14.544 #undef SPDK_CONFIG_DPDK_UADK 00:13:14.544 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:14.545 #define SPDK_CONFIG_EXAMPLES 1 00:13:14.545 #undef SPDK_CONFIG_FC 00:13:14.545 #define SPDK_CONFIG_FC_PATH 00:13:14.545 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:14.545 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:14.545 #define SPDK_CONFIG_FSDEV 1 00:13:14.545 #undef SPDK_CONFIG_FUSE 00:13:14.545 #undef SPDK_CONFIG_FUZZER 00:13:14.545 #define SPDK_CONFIG_FUZZER_LIB 00:13:14.545 #undef SPDK_CONFIG_GOLANG 00:13:14.545 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:14.545 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:14.545 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:14.545 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:14.545 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:14.545 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:14.545 #undef SPDK_CONFIG_HAVE_LZ4 00:13:14.545 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:14.545 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:14.545 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:14.545 #define SPDK_CONFIG_IDXD 1 00:13:14.545 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:14.545 #undef SPDK_CONFIG_IPSEC_MB 00:13:14.545 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:14.545 #define SPDK_CONFIG_ISAL 1 00:13:14.545 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:14.545 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:14.545 #define SPDK_CONFIG_LIBDIR 00:13:14.545 #undef SPDK_CONFIG_LTO 00:13:14.545 #define SPDK_CONFIG_MAX_LCORES 128 00:13:14.545 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:14.545 #define SPDK_CONFIG_NVME_CUSE 1 00:13:14.545 #undef SPDK_CONFIG_OCF 00:13:14.545 #define SPDK_CONFIG_OCF_PATH 00:13:14.545 #define SPDK_CONFIG_OPENSSL_PATH 00:13:14.545 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:14.545 #define SPDK_CONFIG_PGO_DIR 00:13:14.545 #undef SPDK_CONFIG_PGO_USE 00:13:14.545 #define SPDK_CONFIG_PREFIX /usr/local 00:13:14.545 #undef SPDK_CONFIG_RAID5F 00:13:14.545 #undef SPDK_CONFIG_RBD 00:13:14.545 #define SPDK_CONFIG_RDMA 1 00:13:14.545 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:14.545 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:14.545 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:14.545 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:14.545 #define SPDK_CONFIG_SHARED 1 00:13:14.545 #undef SPDK_CONFIG_SMA 00:13:14.545 #define SPDK_CONFIG_TESTS 1 00:13:14.545 #undef SPDK_CONFIG_TSAN 00:13:14.545 #define SPDK_CONFIG_UBLK 1 00:13:14.545 #define SPDK_CONFIG_UBSAN 1 00:13:14.545 #undef SPDK_CONFIG_UNIT_TESTS 00:13:14.545 #undef SPDK_CONFIG_URING 00:13:14.545 #define SPDK_CONFIG_URING_PATH 00:13:14.545 #undef SPDK_CONFIG_URING_ZNS 00:13:14.545 #undef SPDK_CONFIG_USDT 00:13:14.545 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:14.545 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:14.545 #undef SPDK_CONFIG_VFIO_USER 00:13:14.545 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:14.545 #define SPDK_CONFIG_VHOST 1 00:13:14.545 #define SPDK_CONFIG_VIRTIO 1 00:13:14.545 #undef SPDK_CONFIG_VTUNE 00:13:14.545 #define SPDK_CONFIG_VTUNE_DIR 00:13:14.545 #define SPDK_CONFIG_WERROR 1 00:13:14.545 #define SPDK_CONFIG_WPDK_DIR 00:13:14.545 #define SPDK_CONFIG_XNVME 1 00:13:14.545 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:14.545 04:30:10 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:14.545 04:30:10 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:14.545 04:30:10 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.545 04:30:10 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.545 04:30:10 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.545 04:30:10 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.545 04:30:10 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.545 04:30:10 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.545 04:30:10 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.545 04:30:10 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:14.545 04:30:10 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@68 -- # uname -s 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:14.545 
04:30:10 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:14.545 04:30:10 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:13:14.545 04:30:10 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:14.545 04:30:10 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:14.546 04:30:10 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:14.546 04:30:10 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:14.546 04:30:10 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
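The long run of paired "# : 0" / "export SPDK_TEST_*" entries traced above (autotest_common.sh @58 through @178) is the test-flag default block: each flag is given a default only if the job config did not already set it, then exported. A sketch of that idiom, inferred from the trace rather than quoted from autotest_common.sh, with two flags from this run as examples; the sanitizer exports at the end are verbatim from the trace:

# ':' is the shell no-op; ${VAR:=default} assigns only when VAR is unset
# or empty, so values injected via autorun-spdk.conf take precedence.
: "${SPDK_TEST_NVME:=0}"
export SPDK_TEST_NVME

: "${SPDK_TEST_XNVME:=0}"
export SPDK_TEST_XNVME

# The sanitizer knobs exported a few entries later (values copied from
# this trace):
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134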
00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68600 ]] 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68600 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:14.547 04:30:10 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.cSblA0 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.cSblA0/tests/xnvme /tmp/spdk.cSblA0 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:14.547 04:30:11 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13981622272 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5586624512 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260625408 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13981622272 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5586624512 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265237504 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.547 04:30:11 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91260149760 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=8442630144 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:14.547 * Looking for test storage... 
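The mounts/fss/avails/sizes/uses assignments traced above are set_test_storage parsing `df -T` into associative arrays before picking a directory with room for the requested 2 GiB (plus margin). A condensed sketch of that logic, paraphrased from the trace: the read field list and the awk filter are the ones visible here, the unit conversion and the storage_candidates array (populated earlier from testdir and a mktemp fallback) are assumptions.

set_test_storage() {
    local requested_size=$1 target_dir target_space
    local -A mounts fss sizes avails uses
    local source fs size use avail mount

    # df -T columns: Filesystem Type 1K-blocks Used Available Use% Mounted
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        # Convert 1K blocks to bytes to match the byte counts seen in the
        # trace; the real helper may obtain bytes differently.
        sizes["$mount"]=$((size * 1024))
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)

    printf '* Looking for test storage...\n'
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        ((target_space >= requested_size)) || continue
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
        return 0
    done
    return 1
}

In this run the first candidate, the test directory itself on a btrfs /home with ~13 GB available, satisfied the 2214592512-byte request, so SPDK_TEST_STORAGE lands under test/nvme/xnvme as the trace shows below.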
00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13981622272 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:14.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:13:14.547 04:30:11 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:14.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.548 --rc genhtml_branch_coverage=1 00:13:14.548 --rc genhtml_function_coverage=1 00:13:14.548 --rc genhtml_legend=1 00:13:14.548 --rc geninfo_all_blocks=1 00:13:14.548 --rc geninfo_unexecuted_blocks=1 00:13:14.548 00:13:14.548 ' 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:14.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.548 --rc genhtml_branch_coverage=1 00:13:14.548 --rc genhtml_function_coverage=1 00:13:14.548 --rc genhtml_legend=1 00:13:14.548 --rc geninfo_all_blocks=1 
00:13:14.548 --rc geninfo_unexecuted_blocks=1 00:13:14.548 00:13:14.548 ' 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:14.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.548 --rc genhtml_branch_coverage=1 00:13:14.548 --rc genhtml_function_coverage=1 00:13:14.548 --rc genhtml_legend=1 00:13:14.548 --rc geninfo_all_blocks=1 00:13:14.548 --rc geninfo_unexecuted_blocks=1 00:13:14.548 00:13:14.548 ' 00:13:14.548 04:30:11 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:14.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.548 --rc genhtml_branch_coverage=1 00:13:14.548 --rc genhtml_function_coverage=1 00:13:14.548 --rc genhtml_legend=1 00:13:14.548 --rc geninfo_all_blocks=1 00:13:14.548 --rc geninfo_unexecuted_blocks=1 00:13:14.548 00:13:14.548 ' 00:13:14.548 04:30:11 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.548 04:30:11 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.548 04:30:11 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.548 04:30:11 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.548 04:30:11 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.548 04:30:11 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:14.548 04:30:11 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.548 04:30:11 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:13:14.548 04:30:11 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:13:14.549 04:30:11 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:13:14.549 04:30:11 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:14.807 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:15.064 Waiting for block devices as requested 00:13:15.064 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.064 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.321 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.321 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:20.603 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:20.603 04:30:16 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:13:20.863 04:30:17 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:13:20.863 04:30:17 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:13:20.863 04:30:17 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:13:20.863 04:30:17 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:13:20.863 04:30:17 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:13:20.863 04:30:17 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:13:20.863 04:30:17 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:20.863 No valid GPT data, bailing 00:13:20.863 04:30:17 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:20.863 04:30:17 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:13:20.863 04:30:17 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:13:20.863 04:30:17 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:13:20.863 04:30:17 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:13:20.863 04:30:17 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:13:20.863 04:30:17 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:13:20.863 04:30:17 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:13:20.863 04:30:17 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:20.863 04:30:17 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:20.863 04:30:17 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:20.863 04:30:17 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:20.863 04:30:17 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:20.864 04:30:17 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:20.864 04:30:17 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:20.864 04:30:17 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:20.864 04:30:17 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:20.864 04:30:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:20.864 04:30:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.864 04:30:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.864 ************************************ 00:13:20.864 START TEST xnvme_rpc 00:13:20.864 ************************************ 00:13:20.864 04:30:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:20.864 04:30:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:20.864 04:30:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:20.864 04:30:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:20.864 04:30:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:20.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.864 04:30:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=68986 00:13:20.864 04:30:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 68986 00:13:20.864 04:30:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68986 ']' 00:13:20.864 04:30:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.864 04:30:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.864 04:30:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:20.864 04:30:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.864 04:30:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.864 04:30:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.125 [2024-11-27 04:30:17.481204] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
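The xnvme_rpc test that starts here boils down to: launch spdk_tgt, create an xnvme bdev over libaio, read the configuration back through the framework_get_config RPC, and verify each field with jq before deleting the bdev and killing the target. A condensed sketch of that flow, paraphrased from the trace below; rpc_cmd in the trace talks to the freshly started spdk_tgt over /var/tmp/spdk.sock, which the sketch approximates by invoking scripts/rpc.py directly, and the empty conserve_cpu argument from this run is simply omitted.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # stand-in for rpc_cmd

rpc_xnvme() {
    # Pull one field of the bdev_xnvme_create config back out; the jq
    # filter is the one visible in the trace.
    "$rpc" framework_get_config bdev |
        jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
}

"$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
[[ $(rpc_xnvme name) == xnvme_bdev ]]
[[ $(rpc_xnvme filename) == /dev/nvme0n1 ]]
[[ $(rpc_xnvme io_mechanism) == libaio ]]
[[ $(rpc_xnvme conserve_cpu) == false ]]
"$rpc" bdev_xnvme_delete xnvme_bdev

Each of the four [[ ... ]] comparisons corresponds to one of the backslash-escaped pattern matches in the trace below (xnvme.sh @62 through @65), all of which pass in this run before the target is torn down with killprocess.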
00:13:21.125 [2024-11-27 04:30:17.481788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68986 ] 00:13:21.125 [2024-11-27 04:30:17.639072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.383 [2024-11-27 04:30:17.738913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.954 xnvme_bdev 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:21.954 04:30:18 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 68986 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68986 ']' 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68986 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68986 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.954 killing process with pid 68986 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68986' 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 68986 00:13:21.954 04:30:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 68986 00:13:23.865 00:13:23.865 real 0m2.632s 00:13:23.865 user 0m2.717s 00:13:23.865 sys 0m0.358s 00:13:23.865 04:30:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.865 04:30:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.865 ************************************ 00:13:23.865 END TEST xnvme_rpc 00:13:23.865 ************************************ 00:13:23.865 04:30:20 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:23.865 04:30:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:23.865 04:30:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.865 04:30:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.865 ************************************ 00:13:23.865 START TEST xnvme_bdevperf 00:13:23.865 ************************************ 00:13:23.865 04:30:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:23.865 04:30:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:23.865 04:30:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:23.865 04:30:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:23.865 04:30:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:23.865 04:30:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:23.865 04:30:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:23.865 04:30:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:23.865 { 00:13:23.865 "subsystems": [ 00:13:23.865 { 00:13:23.865 "subsystem": "bdev", 00:13:23.865 "config": [ 00:13:23.865 { 00:13:23.865 "params": { 00:13:23.865 "io_mechanism": "libaio", 00:13:23.865 "conserve_cpu": false, 00:13:23.865 "filename": "/dev/nvme0n1", 00:13:23.865 "name": "xnvme_bdev" 00:13:23.865 }, 00:13:23.865 "method": "bdev_xnvme_create" 00:13:23.865 }, 00:13:23.865 { 00:13:23.865 "method": "bdev_wait_for_examine" 00:13:23.865 } 00:13:23.865 ] 00:13:23.865 } 00:13:23.865 ] 00:13:23.865 } 00:13:23.865 [2024-11-27 04:30:20.139713] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:13:23.865 [2024-11-27 04:30:20.139843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69060 ] 00:13:23.865 [2024-11-27 04:30:20.298906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.865 [2024-11-27 04:30:20.401102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.127 Running I/O for 5 seconds... 00:13:26.450 34973.00 IOPS, 136.61 MiB/s [2024-11-27T04:30:23.979Z] 34326.50 IOPS, 134.09 MiB/s [2024-11-27T04:30:24.921Z] 35107.00 IOPS, 137.14 MiB/s [2024-11-27T04:30:25.863Z] 35136.00 IOPS, 137.25 MiB/s 00:13:29.276 Latency(us) 00:13:29.276 [2024-11-27T04:30:25.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.276 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:29.276 xnvme_bdev : 5.00 35234.07 137.63 0.00 0.00 1811.83 217.40 10183.29 00:13:29.276 [2024-11-27T04:30:25.863Z] =================================================================================================================== 00:13:29.276 [2024-11-27T04:30:25.863Z] Total : 35234.07 137.63 0.00 0.00 1811.83 217.40 10183.29 00:13:29.847 04:30:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:29.847 04:30:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:29.847 04:30:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:29.847 04:30:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:29.847 04:30:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:30.107 { 00:13:30.107 "subsystems": [ 00:13:30.107 { 00:13:30.107 "subsystem": "bdev", 00:13:30.107 "config": [ 00:13:30.107 { 00:13:30.107 "params": { 00:13:30.107 "io_mechanism": "libaio", 00:13:30.107 "conserve_cpu": false, 00:13:30.107 "filename": "/dev/nvme0n1", 00:13:30.107 "name": "xnvme_bdev" 00:13:30.107 }, 00:13:30.107 "method": "bdev_xnvme_create" 00:13:30.107 }, 00:13:30.107 { 00:13:30.107 "method": "bdev_wait_for_examine" 00:13:30.107 } 00:13:30.107 ] 00:13:30.108 } 00:13:30.108 ] 00:13:30.108 } 00:13:30.108 [2024-11-27 04:30:26.476104] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
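The JSON document printed above is the configuration that gen_conf feeds to bdevperf on /dev/fd/62: a single bdev_xnvme_create entry (libaio, conserve_cpu=false, /dev/nvme0n1) followed by bdev_wait_for_examine. To replay this workload outside the harness, the same config could be saved to a file (bdev.json is an illustrative name) and passed to bdevperf with the options used in the trace:

# sketch: standalone replay of the randread run configured above
./build/examples/bdevperf --json bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096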
00:13:30.108 [2024-11-27 04:30:26.476222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69132 ] 00:13:30.108 [2024-11-27 04:30:26.631441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.367 [2024-11-27 04:30:26.730585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.626 Running I/O for 5 seconds... 00:13:32.535 34294.00 IOPS, 133.96 MiB/s [2024-11-27T04:30:30.052Z] 35119.00 IOPS, 137.18 MiB/s [2024-11-27T04:30:31.422Z] 34323.67 IOPS, 134.08 MiB/s [2024-11-27T04:30:32.354Z] 34627.00 IOPS, 135.26 MiB/s 00:13:35.767 Latency(us) 00:13:35.767 [2024-11-27T04:30:32.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.767 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:35.767 xnvme_bdev : 5.00 35053.12 136.93 0.00 0.00 1821.21 93.34 137121.48 00:13:35.767 [2024-11-27T04:30:32.354Z] =================================================================================================================== 00:13:35.767 [2024-11-27T04:30:32.354Z] Total : 35053.12 136.93 0.00 0.00 1821.21 93.34 137121.48 00:13:36.333 00:13:36.333 real 0m12.662s 00:13:36.333 user 0m4.680s 00:13:36.333 sys 0m5.510s 00:13:36.333 04:30:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.333 04:30:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:36.333 ************************************ 00:13:36.333 END TEST xnvme_bdevperf 00:13:36.333 ************************************ 00:13:36.333 04:30:32 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:36.333 04:30:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:36.333 04:30:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.333 04:30:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.333 ************************************ 00:13:36.333 START TEST xnvme_fio_plugin 00:13:36.333 ************************************ 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:36.333 04:30:32 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:36.333 04:30:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:36.333 { 00:13:36.333 "subsystems": [ 00:13:36.333 { 00:13:36.333 "subsystem": "bdev", 00:13:36.333 "config": [ 00:13:36.333 { 00:13:36.333 "params": { 00:13:36.333 "io_mechanism": "libaio", 00:13:36.333 "conserve_cpu": false, 00:13:36.333 "filename": "/dev/nvme0n1", 00:13:36.333 "name": "xnvme_bdev" 00:13:36.333 }, 00:13:36.333 "method": "bdev_xnvme_create" 00:13:36.333 }, 00:13:36.333 { 00:13:36.333 "method": "bdev_wait_for_examine" 00:13:36.333 } 00:13:36.333 ] 00:13:36.333 } 00:13:36.333 ] 00:13:36.333 } 00:13:36.591 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:36.591 fio-3.35 00:13:36.591 Starting 1 thread 00:13:43.157 00:13:43.157 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69249: Wed Nov 27 04:30:38 2024 00:13:43.157 read: IOPS=43.4k, BW=170MiB/s (178MB/s)(848MiB/5001msec) 00:13:43.157 slat (usec): min=3, max=1569, avg=19.49, stdev=27.41 00:13:43.157 clat (usec): min=81, max=6447, avg=875.55, stdev=531.76 00:13:43.157 lat (usec): min=128, max=6460, avg=895.05, stdev=534.89 00:13:43.157 clat percentiles (usec): 00:13:43.157 | 1.00th=[ 176], 5.00th=[ 251], 10.00th=[ 322], 20.00th=[ 445], 00:13:43.157 | 30.00th=[ 553], 40.00th=[ 668], 50.00th=[ 775], 60.00th=[ 889], 00:13:43.157 | 70.00th=[ 1020], 80.00th=[ 1205], 90.00th=[ 1549], 95.00th=[ 1926], 00:13:43.157 | 99.00th=[ 2704], 99.50th=[ 2966], 99.90th=[ 3523], 99.95th=[ 3884], 00:13:43.157 | 99.99th=[ 6259] 00:13:43.157 bw ( KiB/s): min=147672, max=201352, per=100.00%, avg=176138.67, stdev=17734.38, 
samples=9 00:13:43.157 iops : min=36918, max=50338, avg=44034.67, stdev=4433.59, samples=9 00:13:43.157 lat (usec) : 100=0.01%, 250=4.98%, 500=19.98%, 750=22.81%, 1000=20.58% 00:13:43.157 lat (msec) : 2=27.36%, 4=4.25%, 10=0.04% 00:13:43.157 cpu : usr=28.52%, sys=52.18%, ctx=47, majf=0, minf=764 00:13:43.157 IO depths : 1=0.2%, 2=1.5%, 4=4.6%, 8=11.2%, 16=25.2%, 32=55.5%, >=64=1.8% 00:13:43.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.157 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:13:43.157 issued rwts: total=217060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.157 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:43.157 00:13:43.157 Run status group 0 (all jobs): 00:13:43.157 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=848MiB (889MB), run=5001-5001msec 00:13:43.157 ----------------------------------------------------- 00:13:43.157 Suppressions used: 00:13:43.157 count bytes template 00:13:43.157 1 11 /usr/src/fio/parse.c 00:13:43.157 1 8 libtcmalloc_minimal.so 00:13:43.157 1 904 libcrypto.so 00:13:43.157 ----------------------------------------------------- 00:13:43.157 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:43.157 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:43.158 04:30:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.158 { 00:13:43.158 "subsystems": [ 00:13:43.158 { 00:13:43.158 "subsystem": "bdev", 00:13:43.158 "config": [ 00:13:43.158 { 00:13:43.158 "params": { 00:13:43.158 "io_mechanism": "libaio", 00:13:43.158 "conserve_cpu": false, 00:13:43.158 "filename": "/dev/nvme0n1", 00:13:43.158 "name": "xnvme_bdev" 00:13:43.158 }, 00:13:43.158 "method": "bdev_xnvme_create" 00:13:43.158 }, 00:13:43.158 { 00:13:43.158 "method": "bdev_wait_for_examine" 00:13:43.158 } 00:13:43.158 ] 00:13:43.158 } 00:13:43.158 ] 00:13:43.158 } 00:13:43.415 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:43.415 fio-3.35 00:13:43.415 Starting 1 thread 00:13:50.002 00:13:50.002 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69344: Wed Nov 27 04:30:45 2024 00:13:50.002 write: IOPS=35.2k, BW=138MiB/s (144MB/s)(689MiB/5001msec); 0 zone resets 00:13:50.002 slat (usec): min=3, max=5916, avg=22.53, stdev=71.03 00:13:50.002 clat (usec): min=86, max=14640, avg=1175.02, stdev=636.58 00:13:50.002 lat (usec): min=100, max=14644, avg=1197.55, stdev=635.13 00:13:50.002 clat percentiles (usec): 00:13:50.002 | 1.00th=[ 229], 5.00th=[ 355], 10.00th=[ 474], 20.00th=[ 668], 00:13:50.002 | 30.00th=[ 816], 40.00th=[ 947], 50.00th=[ 1074], 60.00th=[ 1221], 00:13:50.002 | 70.00th=[ 1385], 80.00th=[ 1614], 90.00th=[ 1958], 95.00th=[ 2278], 00:13:50.002 | 99.00th=[ 2999], 99.50th=[ 3326], 99.90th=[ 5866], 99.95th=[ 6390], 00:13:50.002 | 99.99th=[12256] 00:13:50.002 bw ( KiB/s): min=116952, max=167216, per=99.61%, avg=140435.56, stdev=13997.48, samples=9 00:13:50.002 iops : min=29238, max=41804, avg=35108.89, stdev=3499.37, samples=9 00:13:50.002 lat (usec) : 100=0.01%, 250=1.48%, 500=9.78%, 750=13.90%, 1000=18.80% 00:13:50.002 lat (msec) : 2=46.78%, 4=9.09%, 10=0.14%, 20=0.02% 00:13:50.002 cpu : usr=35.50%, sys=51.28%, ctx=22, majf=0, minf=764 00:13:50.002 IO depths : 1=0.3%, 2=1.1%, 4=3.8%, 8=10.2%, 16=24.7%, 32=58.0%, >=64=2.0% 00:13:50.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.002 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:13:50.002 issued rwts: total=0,176270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.002 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.002 00:13:50.002 Run status group 0 (all jobs): 00:13:50.002 WRITE: bw=138MiB/s (144MB/s), 138MiB/s-138MiB/s (144MB/s-144MB/s), io=689MiB (722MB), run=5001-5001msec 00:13:50.002 ----------------------------------------------------- 00:13:50.002 Suppressions used: 00:13:50.002 count bytes template 00:13:50.002 1 11 /usr/src/fio/parse.c 00:13:50.002 1 8 libtcmalloc_minimal.so 00:13:50.002 1 904 libcrypto.so 00:13:50.002 ----------------------------------------------------- 00:13:50.002 00:13:50.002 00:13:50.002 real 0m13.740s 00:13:50.002 user 0m6.057s 00:13:50.002 sys 0m5.660s 00:13:50.002 04:30:46 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.002 ************************************ 00:13:50.002 END TEST xnvme_fio_plugin 00:13:50.002 ************************************ 00:13:50.002 04:30:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:50.002 04:30:46 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:50.002 04:30:46 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:50.002 04:30:46 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:50.002 04:30:46 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:50.002 04:30:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:50.002 04:30:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.002 04:30:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.002 ************************************ 00:13:50.002 START TEST xnvme_rpc 00:13:50.002 ************************************ 00:13:50.002 04:30:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:50.002 04:30:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:50.002 04:30:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:50.002 04:30:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:50.002 04:30:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:50.264 04:30:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69430 00:13:50.264 04:30:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69430 00:13:50.264 04:30:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69430 ']' 00:13:50.264 04:30:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.264 04:30:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.264 04:30:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.264 04:30:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.264 04:30:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:50.264 04:30:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.264 [2024-11-27 04:30:46.662181] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
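This second xnvme_rpc pass repeats the earlier create/inspect/delete cycle with conserve_cpu enabled; on the wire the only difference is the -c flag on create. A sketch under the same assumptions as before:

./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # conserve_cpu=true
./scripts/rpc.py framework_get_config bdev |
  jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true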
00:13:50.264 [2024-11-27 04:30:46.662312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69430 ] 00:13:50.264 [2024-11-27 04:30:46.822942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.524 [2024-11-27 04:30:46.925780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.094 xnvme_bdev 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:51.094 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69430 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69430 ']' 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69430 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69430 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.355 killing process with pid 69430 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69430' 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69430 00:13:51.355 04:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69430 00:13:52.743 00:13:52.743 real 0m2.733s 00:13:52.743 user 0m2.829s 00:13:52.743 sys 0m0.353s 00:13:52.743 04:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.743 04:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.743 ************************************ 00:13:52.743 END TEST xnvme_rpc 00:13:52.743 ************************************ 00:13:53.005 04:30:49 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:53.005 04:30:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:53.005 04:30:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.005 04:30:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.005 ************************************ 00:13:53.005 START TEST xnvme_bdevperf 00:13:53.005 ************************************ 00:13:53.005 04:30:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:53.005 04:30:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:53.005 04:30:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:53.005 04:30:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:53.005 04:30:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:53.005 04:30:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:13:53.005 04:30:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:53.005 04:30:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:53.005 { 00:13:53.005 "subsystems": [ 00:13:53.005 { 00:13:53.005 "subsystem": "bdev", 00:13:53.005 "config": [ 00:13:53.005 { 00:13:53.005 "params": { 00:13:53.005 "io_mechanism": "libaio", 00:13:53.005 "conserve_cpu": true, 00:13:53.005 "filename": "/dev/nvme0n1", 00:13:53.005 "name": "xnvme_bdev" 00:13:53.005 }, 00:13:53.005 "method": "bdev_xnvme_create" 00:13:53.005 }, 00:13:53.005 { 00:13:53.005 "method": "bdev_wait_for_examine" 00:13:53.005 } 00:13:53.005 ] 00:13:53.005 } 00:13:53.005 ] 00:13:53.005 } 00:13:53.005 [2024-11-27 04:30:49.424261] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:13:53.005 [2024-11-27 04:30:49.424379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69498 ] 00:13:53.005 [2024-11-27 04:30:49.582885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.266 [2024-11-27 04:30:49.683541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.524 Running I/O for 5 seconds... 00:13:55.397 39067.00 IOPS, 152.61 MiB/s [2024-11-27T04:30:53.385Z] 39223.50 IOPS, 153.22 MiB/s [2024-11-27T04:30:53.957Z] 38514.67 IOPS, 150.45 MiB/s [2024-11-27T04:30:55.330Z] 38467.25 IOPS, 150.26 MiB/s 00:13:58.743 Latency(us) 00:13:58.743 [2024-11-27T04:30:55.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.743 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:58.743 xnvme_bdev : 5.00 38318.56 149.68 0.00 0.00 1665.56 174.08 10435.35 00:13:58.743 [2024-11-27T04:30:55.330Z] =================================================================================================================== 00:13:58.743 [2024-11-27T04:30:55.330Z] Total : 38318.56 149.68 0.00 0.00 1665.56 174.08 10435.35 00:13:59.307 04:30:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:59.307 04:30:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:59.307 04:30:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:59.307 04:30:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:59.307 04:30:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:59.307 { 00:13:59.307 "subsystems": [ 00:13:59.307 { 00:13:59.307 "subsystem": "bdev", 00:13:59.307 "config": [ 00:13:59.307 { 00:13:59.307 "params": { 00:13:59.307 "io_mechanism": "libaio", 00:13:59.307 "conserve_cpu": true, 00:13:59.307 "filename": "/dev/nvme0n1", 00:13:59.307 "name": "xnvme_bdev" 00:13:59.307 }, 00:13:59.307 "method": "bdev_xnvme_create" 00:13:59.307 }, 00:13:59.307 { 00:13:59.307 "method": "bdev_wait_for_examine" 00:13:59.307 } 00:13:59.307 ] 00:13:59.307 } 00:13:59.307 ] 00:13:59.307 } 00:13:59.307 [2024-11-27 04:30:55.768672] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
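The xnvme_fio_plugin runs in this log drive fio through the fio_bdev wrapper with every option on the command line. A rough standalone equivalent written as a job file (a sketch only: bdev.json stands in for the /dev/fd/62 config, and the ASAN preload used by this sanitized build is dropped):

# sketch: equivalent fio job file for the randread plugin run
cat > xnvme_bdev.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=bdev.json
thread=1
direct=1
bs=4k
iodepth=64
time_based=1
runtime=5

[xnvme_bdev]
filename=xnvme_bdev
rw=randread
EOF
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio xnvme_bdev.fio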
00:13:59.307 [2024-11-27 04:30:55.768846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69575 ] 00:13:59.565 [2024-11-27 04:30:55.929409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.565 [2024-11-27 04:30:56.030206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.822 Running I/O for 5 seconds... 00:14:01.809 32535.00 IOPS, 127.09 MiB/s [2024-11-27T04:30:59.329Z] 33692.00 IOPS, 131.61 MiB/s [2024-11-27T04:31:00.701Z] 33654.33 IOPS, 131.46 MiB/s [2024-11-27T04:31:01.634Z] 32789.50 IOPS, 128.08 MiB/s 00:14:05.047 Latency(us) 00:14:05.047 [2024-11-27T04:31:01.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.047 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:05.047 xnvme_bdev : 5.00 32428.13 126.67 0.00 0.00 1968.00 277.27 8922.98 00:14:05.047 [2024-11-27T04:31:01.634Z] =================================================================================================================== 00:14:05.047 [2024-11-27T04:31:01.634Z] Total : 32428.13 126.67 0.00 0.00 1968.00 277.27 8922.98 00:14:05.611 00:14:05.611 real 0m12.684s 00:14:05.611 user 0m4.798s 00:14:05.611 sys 0m5.838s 00:14:05.611 04:31:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.611 04:31:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:05.611 ************************************ 00:14:05.611 END TEST xnvme_bdevperf 00:14:05.611 ************************************ 00:14:05.611 04:31:02 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:05.611 04:31:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:05.611 04:31:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.611 04:31:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.611 ************************************ 00:14:05.611 START TEST xnvme_fio_plugin 00:14:05.611 ************************************ 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:05.611 04:31:02 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:05.611 04:31:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:05.611 { 00:14:05.611 "subsystems": [ 00:14:05.611 { 00:14:05.611 "subsystem": "bdev", 00:14:05.611 "config": [ 00:14:05.611 { 00:14:05.611 "params": { 00:14:05.611 "io_mechanism": "libaio", 00:14:05.611 "conserve_cpu": true, 00:14:05.611 "filename": "/dev/nvme0n1", 00:14:05.611 "name": "xnvme_bdev" 00:14:05.611 }, 00:14:05.611 "method": "bdev_xnvme_create" 00:14:05.611 }, 00:14:05.611 { 00:14:05.611 "method": "bdev_wait_for_examine" 00:14:05.611 } 00:14:05.611 ] 00:14:05.611 } 00:14:05.611 ] 00:14:05.611 } 00:14:05.867 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:05.867 fio-3.35 00:14:05.867 Starting 1 thread 00:14:12.474 00:14:12.474 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69694: Wed Nov 27 04:31:07 2024 00:14:12.474 read: IOPS=45.5k, BW=178MiB/s (186MB/s)(889MiB/5001msec) 00:14:12.474 slat (usec): min=3, max=1878, avg=18.79, stdev=25.65 00:14:12.474 clat (usec): min=82, max=5233, avg=833.95, stdev=505.93 00:14:12.474 lat (usec): min=142, max=5267, avg=852.74, stdev=508.94 00:14:12.474 clat percentiles (usec): 00:14:12.474 | 1.00th=[ 167], 5.00th=[ 237], 10.00th=[ 302], 20.00th=[ 424], 00:14:12.474 | 30.00th=[ 537], 40.00th=[ 644], 50.00th=[ 750], 60.00th=[ 857], 00:14:12.474 | 70.00th=[ 979], 80.00th=[ 1139], 90.00th=[ 1401], 95.00th=[ 1762], 00:14:12.474 | 99.00th=[ 2737], 99.50th=[ 3032], 99.90th=[ 3687], 99.95th=[ 3884], 00:14:12.474 | 99.99th=[ 4424] 00:14:12.474 bw ( KiB/s): min=163352, max=195304, per=99.87%, avg=181864.89, stdev=10256.75, samples=9 
00:14:12.474 iops : min=40838, max=48826, avg=45466.22, stdev=2564.19, samples=9 00:14:12.474 lat (usec) : 100=0.01%, 250=5.95%, 500=20.54%, 750=23.44%, 1000=21.71% 00:14:12.474 lat (msec) : 2=24.87%, 4=3.46%, 10=0.03% 00:14:12.474 cpu : usr=26.72%, sys=53.70%, ctx=86, majf=0, minf=764 00:14:12.474 IO depths : 1=0.2%, 2=1.5%, 4=4.6%, 8=11.2%, 16=25.4%, 32=55.3%, >=64=1.8% 00:14:12.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.474 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:14:12.474 issued rwts: total=227666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.474 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:12.474 00:14:12.474 Run status group 0 (all jobs): 00:14:12.474 READ: bw=178MiB/s (186MB/s), 178MiB/s-178MiB/s (186MB/s-186MB/s), io=889MiB (933MB), run=5001-5001msec 00:14:12.474 ----------------------------------------------------- 00:14:12.474 Suppressions used: 00:14:12.474 count bytes template 00:14:12.474 1 11 /usr/src/fio/parse.c 00:14:12.474 1 8 libtcmalloc_minimal.so 00:14:12.474 1 904 libcrypto.so 00:14:12.474 ----------------------------------------------------- 00:14:12.474 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:12.474 04:31:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:12.474 { 00:14:12.474 "subsystems": [ 00:14:12.474 { 00:14:12.474 "subsystem": "bdev", 00:14:12.474 "config": [ 00:14:12.474 { 00:14:12.474 "params": { 00:14:12.474 "io_mechanism": "libaio", 00:14:12.474 "conserve_cpu": true, 00:14:12.474 "filename": "/dev/nvme0n1", 00:14:12.474 "name": "xnvme_bdev" 00:14:12.474 }, 00:14:12.474 "method": "bdev_xnvme_create" 00:14:12.475 }, 00:14:12.475 { 00:14:12.475 "method": "bdev_wait_for_examine" 00:14:12.475 } 00:14:12.475 ] 00:14:12.475 } 00:14:12.475 ] 00:14:12.475 } 00:14:12.732 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:12.732 fio-3.35 00:14:12.732 Starting 1 thread 00:14:19.302 00:14:19.302 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69780: Wed Nov 27 04:31:14 2024 00:14:19.302 write: IOPS=37.8k, BW=148MiB/s (155MB/s)(739MiB/5001msec); 0 zone resets 00:14:19.302 slat (usec): min=3, max=696, avg=20.02, stdev=25.21 00:14:19.302 clat (usec): min=19, max=204059, avg=1077.05, stdev=5518.44 00:14:19.302 lat (usec): min=48, max=204071, avg=1097.07, stdev=5518.56 00:14:19.302 clat percentiles (usec): 00:14:19.302 | 1.00th=[ 172], 5.00th=[ 251], 10.00th=[ 326], 20.00th=[ 457], 00:14:19.302 | 30.00th=[ 570], 40.00th=[ 668], 50.00th=[ 775], 60.00th=[ 898], 00:14:19.302 | 70.00th=[ 1037], 80.00th=[ 1221], 90.00th=[ 1549], 95.00th=[ 1926], 00:14:19.302 | 99.00th=[ 2900], 99.50th=[ 3326], 99.90th=[126354], 99.95th=[160433], 00:14:19.302 | 99.99th=[204473] 00:14:19.302 bw ( KiB/s): min=50648, max=181760, per=96.68%, avg=146286.22, stdev=39715.81, samples=9 00:14:19.302 iops : min=12662, max=45440, avg=36571.56, stdev=9928.95, samples=9 00:14:19.302 lat (usec) : 20=0.01%, 50=0.01%, 100=0.01%, 250=4.91%, 500=19.04% 00:14:19.302 lat (usec) : 750=23.99%, 1000=19.79% 00:14:19.302 lat (msec) : 2=27.84%, 4=4.24%, 10=0.05%, 50=0.01%, 100=0.03% 00:14:19.302 lat (msec) : 250=0.10% 00:14:19.302 cpu : usr=35.24%, sys=47.16%, ctx=44, majf=0, minf=764 00:14:19.302 IO depths : 1=0.2%, 2=1.6%, 4=4.7%, 8=11.1%, 16=24.8%, 32=55.7%, >=64=1.8% 00:14:19.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.302 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:14:19.303 issued rwts: total=0,189181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:19.303 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:19.303 00:14:19.303 Run status group 0 (all jobs): 00:14:19.303 WRITE: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=739MiB (775MB), run=5001-5001msec 00:14:19.303 ----------------------------------------------------- 00:14:19.303 Suppressions used: 00:14:19.303 count bytes template 00:14:19.303 1 11 /usr/src/fio/parse.c 00:14:19.303 1 8 libtcmalloc_minimal.so 00:14:19.303 1 904 libcrypto.so 00:14:19.303 ----------------------------------------------------- 00:14:19.303 00:14:19.303 00:14:19.303 real 
0m13.519s 00:14:19.303 user 0m5.736s 00:14:19.303 sys 0m5.529s 00:14:19.303 04:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.303 04:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:19.303 ************************************ 00:14:19.303 END TEST xnvme_fio_plugin 00:14:19.303 ************************************ 00:14:19.303 04:31:15 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:19.303 04:31:15 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:19.303 04:31:15 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:19.303 04:31:15 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:19.303 04:31:15 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:19.303 04:31:15 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:19.303 04:31:15 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:19.303 04:31:15 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:19.303 04:31:15 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:19.303 04:31:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:19.303 04:31:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.303 04:31:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:19.303 ************************************ 00:14:19.303 START TEST xnvme_rpc 00:14:19.303 ************************************ 00:14:19.303 04:31:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:19.303 04:31:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:19.303 04:31:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:19.303 04:31:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:19.303 04:31:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:19.303 04:31:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69866 00:14:19.303 04:31:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69866 00:14:19.303 04:31:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69866 ']' 00:14:19.303 04:31:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.303 04:31:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.303 04:31:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.303 04:31:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.303 04:31:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.303 04:31:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:19.303 [2024-11-27 04:31:15.701495] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
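From here the outer loop in xnvme.sh advances from libaio to the io_uring mechanism and repeats the whole rpc/bdevperf/fio sequence. Per the xnvme_filename map declared at the start of this suite, io_uring still targets the /dev/nvme0n1 block device; only the later io_uring_cmd phase switches to the /dev/ng0n1 character device. The create call for this pass, sketched as before:

./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring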
00:14:19.303 [2024-11-27 04:31:15.701595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69866 ] 00:14:19.303 [2024-11-27 04:31:15.853759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.563 [2024-11-27 04:31:15.953573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.133 xnvme_bdev 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:20.133 04:31:16 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.133 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.392 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.392 04:31:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69866 00:14:20.392 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69866 ']' 00:14:20.392 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69866 00:14:20.392 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:20.392 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.392 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69866 00:14:20.392 killing process with pid 69866 00:14:20.392 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:20.392 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:20.392 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69866' 00:14:20.392 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69866 00:14:20.392 04:31:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69866 00:14:21.769 00:14:21.769 real 0m2.606s 00:14:21.769 user 0m2.763s 00:14:21.769 sys 0m0.327s 00:14:21.769 ************************************ 00:14:21.769 END TEST xnvme_rpc 00:14:21.769 ************************************ 00:14:21.769 04:31:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.769 04:31:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.769 04:31:18 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:21.769 04:31:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:21.769 04:31:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.769 04:31:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:21.769 ************************************ 00:14:21.769 START TEST xnvme_bdevperf 00:14:21.769 ************************************ 00:14:21.769 04:31:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:21.769 04:31:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:21.769 04:31:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:21.769 04:31:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:21.769 04:31:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:21.769 04:31:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:21.769 04:31:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:21.769 04:31:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:21.769 { 00:14:21.769 "subsystems": [ 00:14:21.769 { 00:14:21.769 "subsystem": "bdev", 00:14:21.769 "config": [ 00:14:21.769 { 00:14:21.769 "params": { 00:14:21.769 "io_mechanism": "io_uring", 00:14:21.769 "conserve_cpu": false, 00:14:21.769 "filename": "/dev/nvme0n1", 00:14:21.769 "name": "xnvme_bdev" 00:14:21.769 }, 00:14:21.769 "method": "bdev_xnvme_create" 00:14:21.769 }, 00:14:21.769 { 00:14:21.769 "method": "bdev_wait_for_examine" 00:14:21.769 } 00:14:21.769 ] 00:14:21.769 } 00:14:21.769 ] 00:14:21.769 } 00:14:21.769 [2024-11-27 04:31:18.348579] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:14:21.769 [2024-11-27 04:31:18.348696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69935 ] 00:14:22.027 [2024-11-27 04:31:18.503849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.028 [2024-11-27 04:31:18.590447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.288 Running I/O for 5 seconds... 00:14:24.603 58255.00 IOPS, 227.56 MiB/s [2024-11-27T04:31:22.127Z] 58989.00 IOPS, 230.43 MiB/s [2024-11-27T04:31:23.065Z] 58902.67 IOPS, 230.09 MiB/s [2024-11-27T04:31:24.008Z] 60196.25 IOPS, 235.14 MiB/s [2024-11-27T04:31:24.008Z] 60242.00 IOPS, 235.32 MiB/s 00:14:27.421 Latency(us) 00:14:27.421 [2024-11-27T04:31:24.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.421 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:27.421 xnvme_bdev : 5.05 59701.51 233.21 0.00 0.00 1058.56 359.19 129055.51 00:14:27.421 [2024-11-27T04:31:24.008Z] =================================================================================================================== 00:14:27.421 [2024-11-27T04:31:24.008Z] Total : 59701.51 233.21 0.00 0.00 1058.56 359.19 129055.51 00:14:27.993 04:31:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:27.993 04:31:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:27.993 04:31:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:27.993 04:31:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:27.993 04:31:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:28.253 { 00:14:28.253 "subsystems": [ 00:14:28.253 { 00:14:28.253 "subsystem": "bdev", 00:14:28.253 "config": [ 00:14:28.253 { 00:14:28.253 "params": { 00:14:28.253 "io_mechanism": "io_uring", 00:14:28.253 "conserve_cpu": false, 00:14:28.253 "filename": "/dev/nvme0n1", 00:14:28.253 "name": "xnvme_bdev" 00:14:28.253 }, 00:14:28.253 "method": "bdev_xnvme_create" 00:14:28.253 }, 00:14:28.253 { 00:14:28.253 "method": "bdev_wait_for_examine" 00:14:28.253 } 00:14:28.253 ] 00:14:28.253 } 00:14:28.253 ] 00:14:28.253 } 00:14:28.253 [2024-11-27 04:31:24.623910] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
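bdevperf takes its bdev table as JSON on any readable descriptor; the --json /dev/fd/62 seen above comes from bash process substitution inside the harness's gen_conf. A sketch of an equivalent hand-rolled invocation, reusing the exact config and flags from this run (the binary path is copied from the log and will differ on other hosts):

    # Sketch: hand the JSON bdev config to bdevperf via process substitution.
    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_xnvme_create",
       "params":{"io_mechanism":"io_uring","conserve_cpu":false,
                 "filename":"/dev/nvme0n1","name":"xnvme_bdev"}},
      {"method":"bdev_wait_for_examine"}]}]}'
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 64 -w randread -t 5 -o 4096 -T xnvme_bdev --json <(printf '%s' "$conf")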
00:14:28.253 [2024-11-27 04:31:24.624176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70010 ] 00:14:28.253 [2024-11-27 04:31:24.789776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.513 [2024-11-27 04:31:24.926353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.774 Running I/O for 5 seconds... 00:14:30.707 58400.00 IOPS, 228.12 MiB/s [2024-11-27T04:31:28.234Z] 57185.00 IOPS, 223.38 MiB/s [2024-11-27T04:31:29.614Z] 52383.33 IOPS, 204.62 MiB/s [2024-11-27T04:31:30.183Z] 52704.00 IOPS, 205.88 MiB/s [2024-11-27T04:31:30.443Z] 47937.40 IOPS, 187.26 MiB/s 00:14:33.856 Latency(us) 00:14:33.856 [2024-11-27T04:31:30.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.856 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:33.856 xnvme_bdev : 5.23 45836.87 179.05 0.00 0.00 1391.12 143.36 687220.58 00:14:33.857 [2024-11-27T04:31:30.444Z] =================================================================================================================== 00:14:33.857 [2024-11-27T04:31:30.444Z] Total : 45836.87 179.05 0.00 0.00 1391.12 143.36 687220.58 00:14:34.795 ************************************ 00:14:34.795 END TEST xnvme_bdevperf 00:14:34.795 ************************************ 00:14:34.795 00:14:34.795 real 0m12.835s 00:14:34.795 user 0m6.436s 00:14:34.795 sys 0m6.160s 00:14:34.795 04:31:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.795 04:31:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:34.795 04:31:31 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:34.795 04:31:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:34.795 04:31:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.795 04:31:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:34.795 ************************************ 00:14:34.795 START TEST xnvme_fio_plugin 00:14:34.795 ************************************ 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:34.795 04:31:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.795 { 00:14:34.795 "subsystems": [ 00:14:34.795 { 00:14:34.795 "subsystem": "bdev", 00:14:34.795 "config": [ 00:14:34.795 { 00:14:34.795 "params": { 00:14:34.795 "io_mechanism": "io_uring", 00:14:34.795 "conserve_cpu": false, 00:14:34.795 "filename": "/dev/nvme0n1", 00:14:34.795 "name": "xnvme_bdev" 00:14:34.795 }, 00:14:34.795 "method": "bdev_xnvme_create" 00:14:34.795 }, 00:14:34.795 { 00:14:34.795 "method": "bdev_wait_for_examine" 00:14:34.795 } 00:14:34.795 ] 00:14:34.795 } 00:14:34.795 ] 00:14:34.795 } 00:14:34.795 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:34.795 fio-3.35 00:14:34.795 Starting 1 thread 00:14:41.485 00:14:41.485 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70131: Wed Nov 27 04:31:36 2024 00:14:41.485 read: IOPS=62.0k, BW=242MiB/s (254MB/s)(1211MiB/5001msec) 00:14:41.485 slat (nsec): min=2137, max=64030, avg=3611.54, stdev=1402.16 00:14:41.485 clat (usec): min=156, max=113527, avg=894.33, stdev=391.76 00:14:41.485 lat (usec): min=159, max=113531, avg=897.94, stdev=391.99 00:14:41.485 clat percentiles (usec): 00:14:41.485 | 1.00th=[ 635], 5.00th=[ 668], 10.00th=[ 693], 20.00th=[ 742], 00:14:41.485 | 30.00th=[ 775], 40.00th=[ 807], 50.00th=[ 848], 60.00th=[ 889], 00:14:41.485 | 70.00th=[ 947], 80.00th=[ 1029], 90.00th=[ 1139], 95.00th=[ 1254], 00:14:41.485 | 99.00th=[ 1565], 99.50th=[ 1745], 99.90th=[ 2212], 99.95th=[ 2606], 00:14:41.485 | 99.99th=[ 3949] 00:14:41.485 bw ( 
KiB/s): min=226816, max=273408, per=100.00%, avg=251951.11, stdev=17307.99, samples=9 00:14:41.485 iops : min=56704, max=68352, avg=62987.78, stdev=4327.00, samples=9 00:14:41.485 lat (usec) : 250=0.01%, 500=0.05%, 750=23.21%, 1000=53.67% 00:14:41.485 lat (msec) : 2=22.85%, 4=0.21%, 10=0.01%, 50=0.01%, 100=0.01% 00:14:41.485 lat (msec) : 250=0.01% 00:14:41.485 cpu : usr=37.76%, sys=61.44%, ctx=14, majf=0, minf=762 00:14:41.485 IO depths : 1=1.3%, 2=2.9%, 4=6.2%, 8=12.4%, 16=25.1%, 32=50.5%, >=64=1.6% 00:14:41.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.485 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:41.485 issued rwts: total=309926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:41.485 00:14:41.485 Run status group 0 (all jobs): 00:14:41.485 READ: bw=242MiB/s (254MB/s), 242MiB/s-242MiB/s (254MB/s-254MB/s), io=1211MiB (1269MB), run=5001-5001msec 00:14:41.746 ----------------------------------------------------- 00:14:41.746 Suppressions used: 00:14:41.746 count bytes template 00:14:41.746 1 11 /usr/src/fio/parse.c 00:14:41.746 1 8 libtcmalloc_minimal.so 00:14:41.746 1 904 libcrypto.so 00:14:41.746 ----------------------------------------------------- 00:14:41.746 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:41.746 04:31:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:41.746 { 00:14:41.746 "subsystems": [ 00:14:41.746 { 00:14:41.746 "subsystem": "bdev", 00:14:41.746 "config": [ 00:14:41.746 { 00:14:41.746 "params": { 00:14:41.746 "io_mechanism": "io_uring", 00:14:41.746 "conserve_cpu": false, 00:14:41.746 "filename": "/dev/nvme0n1", 00:14:41.746 "name": "xnvme_bdev" 00:14:41.746 }, 00:14:41.746 "method": "bdev_xnvme_create" 00:14:41.746 }, 00:14:41.746 { 00:14:41.746 "method": "bdev_wait_for_examine" 00:14:41.746 } 00:14:41.746 ] 00:14:41.746 } 00:14:41.746 ] 00:14:41.746 } 00:14:41.746 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:41.746 fio-3.35 00:14:41.746 Starting 1 thread 00:14:48.334 00:14:48.334 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70223: Wed Nov 27 04:31:43 2024 00:14:48.334 write: IOPS=57.0k, BW=223MiB/s (233MB/s)(1113MiB/5001msec); 0 zone resets 00:14:48.334 slat (nsec): min=2833, max=67357, avg=3856.78, stdev=1701.62 00:14:48.334 clat (usec): min=157, max=213972, avg=973.04, stdev=3174.57 00:14:48.334 lat (usec): min=161, max=213980, avg=976.90, stdev=3174.67 00:14:48.334 clat percentiles (usec): 00:14:48.334 | 1.00th=[ 652], 5.00th=[ 685], 10.00th=[ 717], 20.00th=[ 758], 00:14:48.334 | 30.00th=[ 799], 40.00th=[ 840], 50.00th=[ 881], 60.00th=[ 930], 00:14:48.334 | 70.00th=[ 996], 80.00th=[ 1074], 90.00th=[ 1188], 95.00th=[ 1319], 00:14:48.334 | 99.00th=[ 1598], 99.50th=[ 1762], 99.90th=[ 2671], 99.95th=[ 2999], 00:14:48.334 | 99.99th=[212861] 00:14:48.334 bw ( KiB/s): min=175288, max=249344, per=100.00%, avg=228249.78, stdev=23425.36, samples=9 00:14:48.334 iops : min=43822, max=62336, avg=57062.44, stdev=5856.34, samples=9 00:14:48.334 lat (usec) : 250=0.01%, 500=0.04%, 750=18.17%, 1000=52.62% 00:14:48.334 lat (msec) : 2=28.92%, 4=0.21%, 10=0.01%, 250=0.02% 00:14:48.334 cpu : usr=39.44%, sys=59.74%, ctx=11, majf=0, minf=762 00:14:48.334 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:48.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.334 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:48.334 issued rwts: total=0,284980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.334 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:48.334 00:14:48.334 Run status group 0 (all jobs): 00:14:48.334 WRITE: bw=223MiB/s (233MB/s), 223MiB/s-223MiB/s (233MB/s-233MB/s), io=1113MiB (1167MB), run=5001-5001msec 00:14:48.334 ----------------------------------------------------- 00:14:48.334 Suppressions used: 00:14:48.334 count bytes template 00:14:48.334 1 11 /usr/src/fio/parse.c 00:14:48.334 1 8 libtcmalloc_minimal.so 00:14:48.334 1 904 libcrypto.so 00:14:48.334 ----------------------------------------------------- 00:14:48.334 
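Both fio passes above drive the bdev through SPDK's external spdk_bdev ioengine; because this build is ASan-instrumented, the harness locates libasan via ldd on the plugin and LD_PRELOADs it ahead of the engine, exactly as the break in the sanitizer loop shows. A condensed sketch of the same invocation (library, fio, and plugin paths copied from the trace; bdev.json stands in for the config the harness streams over /dev/fd/62):

    # Sketch: fio against an SPDK bdev via the external spdk_bdev engine.
    # ASan builds must preload libasan before the plugin, as the harness does.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev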
00:14:48.334 00:14:48.334 real 0m13.471s 00:14:48.334 user 0m6.533s 00:14:48.334 sys 0m6.525s 00:14:48.334 04:31:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.334 ************************************ 00:14:48.334 END TEST xnvme_fio_plugin 00:14:48.334 ************************************ 00:14:48.334 04:31:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:48.334 04:31:44 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:48.334 04:31:44 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:48.334 04:31:44 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:48.334 04:31:44 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:48.334 04:31:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:48.334 04:31:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.334 04:31:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:48.334 ************************************ 00:14:48.334 START TEST xnvme_rpc 00:14:48.334 ************************************ 00:14:48.334 04:31:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:48.334 04:31:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:48.334 04:31:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:48.334 04:31:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:48.334 04:31:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:48.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.334 04:31:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70309 00:14:48.334 04:31:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:48.334 04:31:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70309 00:14:48.334 04:31:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70309 ']' 00:14:48.334 04:31:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.334 04:31:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.334 04:31:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.334 04:31:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.334 04:31:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.334 [2024-11-27 04:31:44.739773] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
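This second pass re-runs the io_uring matrix with conserve_cpu=true; per the cc table in the trace (cc["true"]=-c), that amounts to one extra -c on the create call, which the xnvme_rpc test below issues against the freshly started target. A sketch with stock rpc.py, repo path assumed from the log:

    # Sketch: same bdev, but with conserve_cpu enabled via -c.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
    "$SPDK/scripts/rpc.py" framework_get_config bdev   # read back what was created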
00:14:48.334 [2024-11-27 04:31:44.740187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70309 ] 00:14:48.334 [2024-11-27 04:31:44.901433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.595 [2024-11-27 04:31:44.999082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.167 xnvme_bdev 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:49.167 04:31:45 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70309 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70309 ']' 00:14:49.167 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70309 00:14:49.431 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:49.431 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.431 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70309 00:14:49.431 killing process with pid 70309 00:14:49.431 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:49.431 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:49.431 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70309' 00:14:49.431 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70309 00:14:49.431 04:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70309 00:14:50.816 ************************************ 00:14:50.816 END TEST xnvme_rpc 00:14:50.816 ************************************ 00:14:50.816 00:14:50.816 real 0m2.621s 00:14:50.816 user 0m2.757s 00:14:50.816 sys 0m0.352s 00:14:50.816 04:31:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.816 04:31:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.816 04:31:47 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:50.816 04:31:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:50.816 04:31:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.816 04:31:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:50.816 ************************************ 00:14:50.816 START TEST xnvme_bdevperf 00:14:50.816 ************************************ 00:14:50.816 04:31:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:50.816 04:31:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:50.816 04:31:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:50.816 04:31:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:50.816 04:31:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:50.816 04:31:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:50.816 04:31:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:50.816 04:31:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:50.816 { 00:14:50.816 "subsystems": [ 00:14:50.816 { 00:14:50.816 "subsystem": "bdev", 00:14:50.816 "config": [ 00:14:50.816 { 00:14:50.816 "params": { 00:14:50.816 "io_mechanism": "io_uring", 00:14:50.816 "conserve_cpu": true, 00:14:50.816 "filename": "/dev/nvme0n1", 00:14:50.816 "name": "xnvme_bdev" 00:14:50.816 }, 00:14:50.816 "method": "bdev_xnvme_create" 00:14:50.816 }, 00:14:50.816 { 00:14:50.816 "method": "bdev_wait_for_examine" 00:14:50.816 } 00:14:50.816 ] 00:14:50.816 } 00:14:50.816 ] 00:14:50.816 } 00:14:50.816 [2024-11-27 04:31:47.382826] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:14:50.816 [2024-11-27 04:31:47.383116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70373 ] 00:14:51.076 [2024-11-27 04:31:47.559906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.363 [2024-11-27 04:31:47.712121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.623 Running I/O for 5 seconds... 00:14:53.501 57675.00 IOPS, 225.29 MiB/s [2024-11-27T04:31:51.029Z] 56480.50 IOPS, 220.63 MiB/s [2024-11-27T04:31:52.020Z] 57045.33 IOPS, 222.83 MiB/s [2024-11-27T04:31:52.998Z] 52343.00 IOPS, 204.46 MiB/s [2024-11-27T04:31:52.998Z] 49165.20 IOPS, 192.05 MiB/s 00:14:56.411 Latency(us) 00:14:56.411 [2024-11-27T04:31:52.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.411 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:56.411 xnvme_bdev : 5.01 49103.44 191.81 0.00 0.00 1298.28 469.46 85095.98 00:14:56.411 [2024-11-27T04:31:52.998Z] =================================================================================================================== 00:14:56.411 [2024-11-27T04:31:52.998Z] Total : 49103.44 191.81 0.00 0.00 1298.28 469.46 85095.98 00:14:57.351 04:31:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:57.351 04:31:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:57.351 04:31:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:57.351 04:31:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:57.351 04:31:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:57.351 { 00:14:57.351 "subsystems": [ 00:14:57.351 { 00:14:57.351 "subsystem": "bdev", 00:14:57.351 "config": [ 00:14:57.351 { 00:14:57.351 "params": { 00:14:57.351 "io_mechanism": "io_uring", 00:14:57.351 "conserve_cpu": true, 00:14:57.351 "filename": "/dev/nvme0n1", 00:14:57.351 "name": "xnvme_bdev" 00:14:57.351 }, 00:14:57.351 "method": "bdev_xnvme_create" 00:14:57.351 }, 00:14:57.351 { 00:14:57.351 "method": "bdev_wait_for_examine" 00:14:57.351 } 00:14:57.351 ] 00:14:57.351 } 00:14:57.351 ] 00:14:57.351 } 00:14:57.351 [2024-11-27 04:31:53.796977] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
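The rpc_xnvme helper used by the xnvme_rpc sections of this suite never trusts the create call alone: it dumps the bdev subsystem config back out of the running target and plucks one parameter with jq. A sketch of that read-back, with the jq filter copied verbatim from the trace; the wrapper function here is a reconstruction, not the harness's exact code:

    # Sketch: verify one parameter of the running target's xnvme bdev.
    SPDK=/home/vagrant/spdk_repo/spdk
    rpc_xnvme() {
        "$SPDK/scripts/rpc.py" framework_get_config bdev \
            | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
    }
    rpc_xnvme conserve_cpu   # prints "true" in this pass of the suite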
00:14:57.351 [2024-11-27 04:31:53.797285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70454 ] 00:14:57.611 [2024-11-27 04:31:53.957429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.611 [2024-11-27 04:31:54.067699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.870 Running I/O for 5 seconds... 00:14:59.786 10930.00 IOPS, 42.70 MiB/s [2024-11-27T04:31:57.758Z] 8454.50 IOPS, 33.03 MiB/s [2024-11-27T04:31:58.702Z] 8982.33 IOPS, 35.09 MiB/s [2024-11-27T04:31:59.710Z] 8424.00 IOPS, 32.91 MiB/s [2024-11-27T04:31:59.710Z] 8730.20 IOPS, 34.10 MiB/s 00:15:03.123 Latency(us) 00:15:03.123 [2024-11-27T04:31:59.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.123 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:03.123 xnvme_bdev : 5.23 8354.96 32.64 0.00 0.00 7647.23 118.94 390392.91 00:15:03.123 [2024-11-27T04:31:59.710Z] =================================================================================================================== 00:15:03.123 [2024-11-27T04:31:59.710Z] Total : 8354.96 32.64 0.00 0.00 7647.23 118.94 390392.91 00:15:04.065 00:15:04.065 real 0m13.057s 00:15:04.065 user 0m9.382s 00:15:04.065 sys 0m3.191s 00:15:04.065 ************************************ 00:15:04.065 END TEST xnvme_bdevperf 00:15:04.065 ************************************ 00:15:04.065 04:32:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.065 04:32:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:04.065 04:32:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:04.065 04:32:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:04.065 04:32:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:04.065 04:32:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:04.065 ************************************ 00:15:04.065 START TEST xnvme_fio_plugin 00:15:04.065 ************************************ 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:04.065 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:04.066 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:04.066 04:32:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:04.066 { 00:15:04.066 "subsystems": [ 00:15:04.066 { 00:15:04.066 "subsystem": "bdev", 00:15:04.066 "config": [ 00:15:04.066 { 00:15:04.066 "params": { 00:15:04.066 "io_mechanism": "io_uring", 00:15:04.066 "conserve_cpu": true, 00:15:04.066 "filename": "/dev/nvme0n1", 00:15:04.066 "name": "xnvme_bdev" 00:15:04.066 }, 00:15:04.066 "method": "bdev_xnvme_create" 00:15:04.066 }, 00:15:04.066 { 00:15:04.066 "method": "bdev_wait_for_examine" 00:15:04.066 } 00:15:04.066 ] 00:15:04.066 } 00:15:04.066 ] 00:15:04.066 } 00:15:04.327 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:04.327 fio-3.35 00:15:04.327 Starting 1 thread 00:15:10.959 00:15:10.959 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70573: Wed Nov 27 04:32:06 2024 00:15:10.959 read: IOPS=31.9k, BW=125MiB/s (131MB/s)(623MiB/5002msec) 00:15:10.959 slat (usec): min=2, max=167, avg= 4.18, stdev= 2.59 00:15:10.959 clat (usec): min=1038, max=3764, avg=1834.25, stdev=339.83 00:15:10.959 lat (usec): min=1041, max=3781, avg=1838.44, stdev=340.41 00:15:10.959 clat percentiles (usec): 00:15:10.959 | 1.00th=[ 1221], 5.00th=[ 1352], 10.00th=[ 1434], 20.00th=[ 1549], 00:15:10.959 | 30.00th=[ 1631], 40.00th=[ 1713], 50.00th=[ 1795], 60.00th=[ 1876], 00:15:10.959 | 70.00th=[ 1975], 80.00th=[ 2089], 90.00th=[ 2278], 95.00th=[ 2442], 00:15:10.959 | 99.00th=[ 2802], 99.50th=[ 2933], 99.90th=[ 3195], 99.95th=[ 3425], 00:15:10.959 | 99.99th=[ 3687] 00:15:10.959 bw ( KiB/s): min=120320, 
max=135168, per=99.54%, avg=127004.11, stdev=4112.65, samples=9 00:15:10.959 iops : min=30080, max=33792, avg=31751.00, stdev=1028.14, samples=9 00:15:10.959 lat (msec) : 2=72.83%, 4=27.17% 00:15:10.959 cpu : usr=48.65%, sys=46.95%, ctx=11, majf=0, minf=762 00:15:10.959 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:10.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.959 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:10.959 issued rwts: total=159552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.959 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:10.959 00:15:10.959 Run status group 0 (all jobs): 00:15:10.959 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=623MiB (654MB), run=5002-5002msec 00:15:10.959 ----------------------------------------------------- 00:15:10.959 Suppressions used: 00:15:10.959 count bytes template 00:15:10.959 1 11 /usr/src/fio/parse.c 00:15:10.959 1 8 libtcmalloc_minimal.so 00:15:10.959 1 904 libcrypto.so 00:15:10.959 ----------------------------------------------------- 00:15:10.959 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:10.959 04:32:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:10.959 { 00:15:10.959 "subsystems": [ 00:15:10.959 { 00:15:10.959 "subsystem": "bdev", 00:15:10.959 "config": [ 00:15:10.959 { 00:15:10.959 "params": { 00:15:10.959 "io_mechanism": "io_uring", 00:15:10.959 "conserve_cpu": true, 00:15:10.959 "filename": "/dev/nvme0n1", 00:15:10.959 "name": "xnvme_bdev" 00:15:10.959 }, 00:15:10.959 "method": "bdev_xnvme_create" 00:15:10.959 }, 00:15:10.959 { 00:15:10.959 "method": "bdev_wait_for_examine" 00:15:10.959 } 00:15:10.959 ] 00:15:10.959 } 00:15:10.959 ] 00:15:10.959 } 00:15:11.221 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:11.221 fio-3.35 00:15:11.221 Starting 1 thread 00:15:17.814 00:15:17.814 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70667: Wed Nov 27 04:32:13 2024 00:15:17.814 write: IOPS=31.0k, BW=121MiB/s (127MB/s)(606MiB/5001msec); 0 zone resets 00:15:17.814 slat (usec): min=2, max=100, avg= 4.66, stdev= 2.86 00:15:17.814 clat (usec): min=214, max=54636, avg=1871.97, stdev=1192.22 00:15:17.814 lat (usec): min=218, max=54640, avg=1876.63, stdev=1192.40 00:15:17.814 clat percentiles (usec): 00:15:17.814 | 1.00th=[ 1205], 5.00th=[ 1352], 10.00th=[ 1434], 20.00th=[ 1549], 00:15:17.814 | 30.00th=[ 1647], 40.00th=[ 1729], 50.00th=[ 1795], 60.00th=[ 1876], 00:15:17.814 | 70.00th=[ 1975], 80.00th=[ 2089], 90.00th=[ 2278], 95.00th=[ 2409], 00:15:17.814 | 99.00th=[ 2802], 99.50th=[ 2999], 99.90th=[16319], 99.95th=[21365], 00:15:17.814 | 99.99th=[53216] 00:15:17.814 bw ( KiB/s): min=113784, max=129536, per=99.61%, avg=123651.67, stdev=4689.41, samples=9 00:15:17.814 iops : min=28446, max=32384, avg=30912.89, stdev=1172.36, samples=9 00:15:17.814 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.11% 00:15:17.814 lat (msec) : 2=72.64%, 4=26.99%, 10=0.04%, 20=0.12%, 50=0.03% 00:15:17.814 lat (msec) : 100=0.03% 00:15:17.814 cpu : usr=50.06%, sys=45.54%, ctx=17, majf=0, minf=762 00:15:17.814 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:15:17.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.814 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:17.814 issued rwts: total=0,155203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.814 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:17.814 00:15:17.814 Run status group 0 (all jobs): 00:15:17.815 WRITE: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=606MiB (636MB), run=5001-5001msec 00:15:17.815 ----------------------------------------------------- 00:15:17.815 Suppressions used: 00:15:17.815 count bytes template 00:15:17.815 1 11 /usr/src/fio/parse.c 00:15:17.815 1 8 libtcmalloc_minimal.so 00:15:17.815 1 904 libcrypto.so 00:15:17.815 ----------------------------------------------------- 00:15:17.815 00:15:18.077 00:15:18.077 real 0m13.962s 00:15:18.077 user 0m8.007s 00:15:18.077 sys 0m5.191s 00:15:18.077 
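Reading the two fio_plugin passes side by side: the conserve_cpu=false read job reported cpu usr=37.76%, sys=61.44%, while this conserve_cpu=true read job reported usr=48.65%, sys=46.95% -- consistent with the flag trading busy completion polling for a less kernel-heavy path, though a single nightly run is not a benchmark. The CI streams fio output rather than saving it, so the file names below are hypothetical, but the comparison would be a one-liner over saved logs:

    # Hypothetical: pull the CPU accounting lines out of two saved fio logs.
    grep -h 'cpu *:' fio_io_uring_false.log fio_io_uring_true.log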
************************************ 00:15:18.077 END TEST xnvme_fio_plugin 00:15:18.077 ************************************ 00:15:18.077 04:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.077 04:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:18.077 04:32:14 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:18.077 04:32:14 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:15:18.077 04:32:14 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:15:18.077 04:32:14 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:15:18.077 04:32:14 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:18.077 04:32:14 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:18.077 04:32:14 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:18.077 04:32:14 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:18.077 04:32:14 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:18.077 04:32:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:18.077 04:32:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.077 04:32:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:18.077 ************************************ 00:15:18.077 START TEST xnvme_rpc 00:15:18.077 ************************************ 00:15:18.077 04:32:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:18.077 04:32:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:18.077 04:32:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:18.077 04:32:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:18.077 04:32:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:18.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.077 04:32:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70752 00:15:18.077 04:32:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70752 00:15:18.077 04:32:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70752 ']' 00:15:18.077 04:32:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.078 04:32:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.078 04:32:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.078 04:32:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.078 04:32:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.078 04:32:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.078 [2024-11-27 04:32:14.570685] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
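From here the matrix moves to its second io_mechanism: io_uring_cmd, which sits on the NVMe generic character device (/dev/ng0n1) and issues uring passthrough commands, rather than the /dev/nvme0n1 block device the plain io_uring runs used. The create/delete pair mirrors the earlier runs; a sketch with stock rpc.py, repo path assumed from the log:

    # Sketch: xnvme bdev over the NVMe char device via io_uring_cmd.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
    "$SPDK/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev   # teardown, as the test does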
00:15:18.078 [2024-11-27 04:32:14.571374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70752 ] 00:15:18.339 [2024-11-27 04:32:14.736486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.339 [2024-11-27 04:32:14.877813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.286 xnvme_bdev 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.286 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70752 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70752 ']' 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70752 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70752 00:15:19.287 killing process with pid 70752 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70752' 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70752 00:15:19.287 04:32:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70752 00:15:21.251 ************************************ 00:15:21.251 END TEST xnvme_rpc 00:15:21.251 ************************************ 00:15:21.251 00:15:21.251 real 0m3.007s 00:15:21.251 user 0m3.022s 00:15:21.251 sys 0m0.466s 00:15:21.251 04:32:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.251 04:32:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.251 04:32:17 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:21.251 04:32:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:21.251 04:32:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.251 04:32:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.251 ************************************ 00:15:21.251 START TEST xnvme_bdevperf 00:15:21.251 ************************************ 00:15:21.251 04:32:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:21.251 04:32:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:21.251 04:32:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:21.251 04:32:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:21.251 04:32:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:21.251 04:32:17 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:21.251 04:32:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:21.251 04:32:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:21.251 { 00:15:21.251 "subsystems": [ 00:15:21.251 { 00:15:21.251 "subsystem": "bdev", 00:15:21.251 "config": [ 00:15:21.251 { 00:15:21.251 "params": { 00:15:21.251 "io_mechanism": "io_uring_cmd", 00:15:21.252 "conserve_cpu": false, 00:15:21.252 "filename": "/dev/ng0n1", 00:15:21.252 "name": "xnvme_bdev" 00:15:21.252 }, 00:15:21.252 "method": "bdev_xnvme_create" 00:15:21.252 }, 00:15:21.252 { 00:15:21.252 "method": "bdev_wait_for_examine" 00:15:21.252 } 00:15:21.252 ] 00:15:21.252 } 00:15:21.252 ] 00:15:21.252 } 00:15:21.252 [2024-11-27 04:32:17.638927] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:15:21.252 [2024-11-27 04:32:17.639084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70822 ] 00:15:21.252 [2024-11-27 04:32:17.804151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.512 [2024-11-27 04:32:17.941743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.773 Running I/O for 5 seconds... 00:15:23.673 33498.00 IOPS, 130.85 MiB/s [2024-11-27T04:32:21.668Z] 33762.50 IOPS, 131.88 MiB/s [2024-11-27T04:32:22.259Z] 34133.00 IOPS, 133.33 MiB/s [2024-11-27T04:32:23.648Z] 33826.00 IOPS, 132.13 MiB/s 00:15:27.061 Latency(us) 00:15:27.061 [2024-11-27T04:32:23.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.061 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:27.061 xnvme_bdev : 5.00 33635.50 131.39 0.00 0.00 1898.15 387.54 11998.13 00:15:27.061 [2024-11-27T04:32:23.648Z] =================================================================================================================== 00:15:27.061 [2024-11-27T04:32:23.648Z] Total : 33635.50 131.39 0.00 0.00 1898.15 387.54 11998.13 00:15:27.633 04:32:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:27.633 04:32:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:27.633 04:32:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:27.633 04:32:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:27.633 04:32:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:27.633 { 00:15:27.633 "subsystems": [ 00:15:27.633 { 00:15:27.633 "subsystem": "bdev", 00:15:27.633 "config": [ 00:15:27.633 { 00:15:27.633 "params": { 00:15:27.633 "io_mechanism": "io_uring_cmd", 00:15:27.633 "conserve_cpu": false, 00:15:27.633 "filename": "/dev/ng0n1", 00:15:27.633 "name": "xnvme_bdev" 00:15:27.633 }, 00:15:27.633 "method": "bdev_xnvme_create" 00:15:27.633 }, 00:15:27.633 { 00:15:27.633 "method": "bdev_wait_for_examine" 00:15:27.633 } 00:15:27.633 ] 00:15:27.633 } 00:15:27.633 ] 00:15:27.633 } 00:15:27.633 [2024-11-27 04:32:24.138269] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
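bdevperf receives its bdev layout through --json (here /dev/fd/62, fed by gen_conf); the same run works from an ordinary file. A sketch, assuming the in-tree bdevperf build and the exact JSON shown above; conf.json is a hypothetical filename:

  cat > conf.json <<'EOF'
  {"subsystems": [{"subsystem": "bdev", "config": [
    {"params": {"io_mechanism": "io_uring_cmd", "conserve_cpu": false,
                "filename": "/dev/ng0n1", "name": "xnvme_bdev"},
     "method": "bdev_xnvme_create"},
    {"method": "bdev_wait_for_examine"}]}]}
  EOF
  ./build/examples/bdevperf --json conf.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096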
00:15:27.633 [2024-11-27 04:32:24.138423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70899 ] 00:15:27.894 [2024-11-27 04:32:24.304442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.895 [2024-11-27 04:32:24.443917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.468 Running I/O for 5 seconds... 00:15:30.350 30259.00 IOPS, 118.20 MiB/s [2024-11-27T04:32:28.002Z] 31083.50 IOPS, 121.42 MiB/s [2024-11-27T04:32:28.944Z] 31610.33 IOPS, 123.48 MiB/s [2024-11-27T04:32:29.889Z] 31736.25 IOPS, 123.97 MiB/s 00:15:33.302 Latency(us) 00:15:33.302 [2024-11-27T04:32:29.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.302 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:33.302 xnvme_bdev : 5.00 31370.03 122.54 0.00 0.00 2034.35 212.68 10838.65 00:15:33.302 [2024-11-27T04:32:29.889Z] =================================================================================================================== 00:15:33.302 [2024-11-27T04:32:29.889Z] Total : 31370.03 122.54 0.00 0.00 2034.35 212.68 10838.65 00:15:34.248 04:32:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:34.248 04:32:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:34.248 04:32:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:34.248 04:32:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:34.248 04:32:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:34.248 { 00:15:34.248 "subsystems": [ 00:15:34.248 { 00:15:34.248 "subsystem": "bdev", 00:15:34.248 "config": [ 00:15:34.248 { 00:15:34.248 "params": { 00:15:34.248 "io_mechanism": "io_uring_cmd", 00:15:34.248 "conserve_cpu": false, 00:15:34.248 "filename": "/dev/ng0n1", 00:15:34.248 "name": "xnvme_bdev" 00:15:34.248 }, 00:15:34.248 "method": "bdev_xnvme_create" 00:15:34.248 }, 00:15:34.248 { 00:15:34.248 "method": "bdev_wait_for_examine" 00:15:34.248 } 00:15:34.248 ] 00:15:34.248 } 00:15:34.248 ] 00:15:34.248 } 00:15:34.248 [2024-11-27 04:32:30.685176] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:15:34.248 [2024-11-27 04:32:30.685630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70979 ] 00:15:34.511 [2024-11-27 04:32:30.856311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.511 [2024-11-27 04:32:30.996505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.774 Running I/O for 5 seconds... 
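The randread, randwrite, unmap and (next) write_zeroes runs are one bdevperf invocation repeated by the io_pattern loop in xnvme.sh; stripped to its shape, with $SPDK_DIR standing in for the harness paths, it is roughly:

  # io_pattern_ref is a nameref to the io_uring_cmd pattern list defined earlier in xnvme.sh
  for io_pattern in "${io_pattern_ref[@]}"; do    # randread randwrite unmap write_zeroes
    "$SPDK_DIR"/build/examples/bdevperf --json <(gen_conf) \
      -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
  done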
00:15:37.108 71616.00 IOPS, 279.75 MiB/s [2024-11-27T04:32:34.643Z] 72096.00 IOPS, 281.62 MiB/s [2024-11-27T04:32:35.589Z] 72768.00 IOPS, 284.25 MiB/s [2024-11-27T04:32:36.531Z] 73184.00 IOPS, 285.88 MiB/s [2024-11-27T04:32:36.531Z] 73408.00 IOPS, 286.75 MiB/s 00:15:39.944 Latency(us) 00:15:39.944 [2024-11-27T04:32:36.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.944 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:39.944 xnvme_bdev : 5.00 73389.89 286.68 0.00 0.00 868.63 529.33 2520.62 00:15:39.944 [2024-11-27T04:32:36.531Z] =================================================================================================================== 00:15:39.944 [2024-11-27T04:32:36.531Z] Total : 73389.89 286.68 0.00 0.00 868.63 529.33 2520.62 00:15:40.888 04:32:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:40.888 04:32:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:40.888 04:32:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:40.888 04:32:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:40.888 04:32:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:40.888 { 00:15:40.888 "subsystems": [ 00:15:40.888 { 00:15:40.888 "subsystem": "bdev", 00:15:40.888 "config": [ 00:15:40.888 { 00:15:40.888 "params": { 00:15:40.888 "io_mechanism": "io_uring_cmd", 00:15:40.888 "conserve_cpu": false, 00:15:40.888 "filename": "/dev/ng0n1", 00:15:40.888 "name": "xnvme_bdev" 00:15:40.888 }, 00:15:40.888 "method": "bdev_xnvme_create" 00:15:40.888 }, 00:15:40.888 { 00:15:40.888 "method": "bdev_wait_for_examine" 00:15:40.888 } 00:15:40.888 ] 00:15:40.888 } 00:15:40.888 ] 00:15:40.888 } 00:15:40.888 [2024-11-27 04:32:37.199407] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:15:40.889 [2024-11-27 04:32:37.199570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71053 ] 00:15:40.889 [2024-11-27 04:32:37.367878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.149 [2024-11-27 04:32:37.508045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.410 Running I/O for 5 seconds... 
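The MiB/s column is the IOPS column times the 4096-byte IO size; a quick cross-check of the unmap total above:

  awk 'BEGIN { printf "%.2f MiB/s\n", 73389.89 * 4096 / (1024 * 1024) }'    # 286.68 MiB/s, matching the table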
00:15:43.297 173.00 IOPS, 0.68 MiB/s [2024-11-27T04:32:40.828Z] 166.50 IOPS, 0.65 MiB/s [2024-11-27T04:32:42.215Z] 172.33 IOPS, 0.67 MiB/s [2024-11-27T04:32:43.156Z] 241.50 IOPS, 0.94 MiB/s [2024-11-27T04:32:43.418Z] 228.60 IOPS, 0.89 MiB/s 00:15:46.831 Latency(us) 00:15:46.831 [2024-11-27T04:32:43.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.831 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:46.831 xnvme_bdev : 5.40 223.37 0.87 0.00 0.00 275611.17 283.57 738842.78 00:15:46.831 [2024-11-27T04:32:43.418Z] =================================================================================================================== 00:15:46.831 [2024-11-27T04:32:43.418Z] Total : 223.37 0.87 0.00 0.00 275611.17 283.57 738842.78 00:15:47.779 ************************************ 00:15:47.779 END TEST xnvme_bdevperf 00:15:47.779 ************************************ 00:15:47.779 00:15:47.779 real 0m26.478s 00:15:47.779 user 0m15.422s 00:15:47.779 sys 0m10.517s 00:15:47.779 04:32:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.779 04:32:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:47.779 04:32:44 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:47.779 04:32:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:47.779 04:32:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.779 04:32:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.779 ************************************ 00:15:47.779 START TEST xnvme_fio_plugin 00:15:47.779 ************************************ 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:47.779 04:32:44 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:47.779 04:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:47.779 { 00:15:47.779 "subsystems": [ 00:15:47.779 { 00:15:47.779 "subsystem": "bdev", 00:15:47.779 "config": [ 00:15:47.779 { 00:15:47.779 "params": { 00:15:47.779 "io_mechanism": "io_uring_cmd", 00:15:47.780 "conserve_cpu": false, 00:15:47.780 "filename": "/dev/ng0n1", 00:15:47.780 "name": "xnvme_bdev" 00:15:47.780 }, 00:15:47.780 "method": "bdev_xnvme_create" 00:15:47.780 }, 00:15:47.780 { 00:15:47.780 "method": "bdev_wait_for_examine" 00:15:47.780 } 00:15:47.780 ] 00:15:47.780 } 00:15:47.780 ] 00:15:47.780 } 00:15:47.780 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:47.780 fio-3.35 00:15:47.780 Starting 1 thread 00:15:54.369 00:15:54.369 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71177: Wed Nov 27 04:32:50 2024 00:15:54.369 read: IOPS=33.1k, BW=129MiB/s (136MB/s)(647MiB/5002msec) 00:15:54.369 slat (nsec): min=2781, max=86149, avg=4728.12, stdev=2814.05 00:15:54.369 clat (usec): min=945, max=4426, avg=1737.73, stdev=345.52 00:15:54.369 lat (usec): min=948, max=4439, avg=1742.46, stdev=346.22 00:15:54.369 clat percentiles (usec): 00:15:54.369 | 1.00th=[ 1139], 5.00th=[ 1270], 10.00th=[ 1336], 20.00th=[ 1450], 00:15:54.369 | 30.00th=[ 1532], 40.00th=[ 1614], 50.00th=[ 1696], 60.00th=[ 1778], 00:15:54.369 | 70.00th=[ 1876], 80.00th=[ 2008], 90.00th=[ 2180], 95.00th=[ 2376], 00:15:54.369 | 99.00th=[ 2737], 99.50th=[ 2900], 99.90th=[ 3523], 99.95th=[ 4113], 00:15:54.369 | 99.99th=[ 4359] 00:15:54.369 bw ( KiB/s): min=126976, max=139264, per=99.99%, avg=132437.33, stdev=4411.82, samples=9 00:15:54.369 iops : min=31744, max=34816, avg=33109.33, stdev=1102.96, samples=9 00:15:54.369 lat (usec) : 1000=0.03% 00:15:54.369 lat (msec) : 2=79.96%, 4=19.95%, 10=0.06% 00:15:54.369 cpu : usr=37.83%, sys=60.71%, ctx=7, majf=0, minf=762 00:15:54.369 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:54.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.369 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:15:54.369 issued rwts: total=165625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:54.369 00:15:54.369 Run status group 0 (all jobs): 00:15:54.369 READ: bw=129MiB/s (136MB/s), 129MiB/s-129MiB/s (136MB/s-136MB/s), io=647MiB (678MB), run=5002-5002msec 00:15:54.631 ----------------------------------------------------- 00:15:54.631 Suppressions used: 00:15:54.631 count bytes template 00:15:54.631 1 11 /usr/src/fio/parse.c 00:15:54.631 1 8 libtcmalloc_minimal.so 00:15:54.631 1 904 libcrypto.so 00:15:54.631 ----------------------------------------------------- 00:15:54.631 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:54.631 04:32:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:54.631 { 00:15:54.631 "subsystems": [ 00:15:54.631 { 00:15:54.631 "subsystem": "bdev", 00:15:54.631 "config": [ 00:15:54.631 { 00:15:54.631 "params": { 00:15:54.631 "io_mechanism": "io_uring_cmd", 00:15:54.631 "conserve_cpu": false, 00:15:54.631 "filename": "/dev/ng0n1", 00:15:54.631 "name": "xnvme_bdev" 00:15:54.631 }, 00:15:54.631 "method": "bdev_xnvme_create" 00:15:54.631 }, 00:15:54.631 { 00:15:54.631 "method": "bdev_wait_for_examine" 00:15:54.631 } 00:15:54.631 ] 00:15:54.631 } 00:15:54.631 ] 00:15:54.631 } 00:15:54.893 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:54.893 fio-3.35 00:15:54.893 Starting 1 thread 00:16:01.522 00:16:01.522 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71262: Wed Nov 27 04:32:57 2024 00:16:01.522 write: IOPS=25.8k, BW=101MiB/s (106MB/s)(504MiB/5001msec); 0 zone resets 00:16:01.522 slat (usec): min=2, max=119, avg= 4.77, stdev= 3.36 00:16:01.522 clat (usec): min=126, max=224489, avg=2296.26, stdev=6788.22 00:16:01.522 lat (usec): min=134, max=224494, avg=2301.02, stdev=6788.28 00:16:01.522 clat percentiles (usec): 00:16:01.522 | 1.00th=[ 938], 5.00th=[ 1254], 10.00th=[ 1369], 20.00th=[ 1500], 00:16:01.522 | 30.00th=[ 1614], 40.00th=[ 1729], 50.00th=[ 1827], 60.00th=[ 1926], 00:16:01.522 | 70.00th=[ 2040], 80.00th=[ 2180], 90.00th=[ 2474], 95.00th=[ 2769], 00:16:01.522 | 99.00th=[ 7046], 99.50th=[ 17433], 99.90th=[100140], 99.95th=[131597], 00:16:01.522 | 99.99th=[223347] 00:16:01.522 bw ( KiB/s): min=63592, max=134416, per=100.00%, avg=111841.89, stdev=20243.53, samples=9 00:16:01.522 iops : min=15898, max=33604, avg=27960.33, stdev=5060.90, samples=9 00:16:01.522 lat (usec) : 250=0.02%, 500=0.18%, 750=0.27%, 1000=0.81% 00:16:01.522 lat (msec) : 2=65.34%, 4=31.75%, 10=0.90%, 20=0.32%, 50=0.11% 00:16:01.522 lat (msec) : 100=0.19%, 250=0.11% 00:16:01.522 cpu : usr=39.34%, sys=59.46%, ctx=11, majf=0, minf=762 00:16:01.522 IO depths : 1=1.4%, 2=2.8%, 4=5.7%, 8=11.7%, 16=24.2%, 32=52.4%, >=64=1.8% 00:16:01.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.522 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:01.522 issued rwts: total=0,129026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:01.522 00:16:01.522 Run status group 0 (all jobs): 00:16:01.522 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=504MiB (528MB), run=5001-5001msec 00:16:01.522 ----------------------------------------------------- 00:16:01.522 Suppressions used: 00:16:01.522 count bytes template 00:16:01.522 1 11 /usr/src/fio/parse.c 00:16:01.522 1 8 libtcmalloc_minimal.so 00:16:01.522 1 904 libcrypto.so 00:16:01.522 ----------------------------------------------------- 00:16:01.522 00:16:01.522 ************************************ 00:16:01.522 END TEST xnvme_fio_plugin 00:16:01.522 ************************************ 00:16:01.522 00:16:01.522 real 0m13.966s 00:16:01.522 user 0m6.877s 00:16:01.522 sys 0m6.622s 00:16:01.522 04:32:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.522 04:32:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:01.784 04:32:58 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:01.784 04:32:58 nvme_xnvme -- xnvme/xnvme.sh@83 -- # 
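Both fio_plugin passes above push every option onto the fio command line; the same runs can be expressed as an ordinary job file against the spdk_bdev external engine. A sketch, assuming the plugin built at build/fio/spdk_bdev and a bdev.json holding the JSON config printed in the log (xnvme.fio and bdev.json are hypothetical names):

  cat > xnvme.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  spdk_json_conf=bdev.json
  thread=1
  direct=1
  bs=4k
  iodepth=64
  time_based=1
  runtime=5

  [xnvme_bdev]
  filename=xnvme_bdev
  rw=randread
  EOF
  LD_PRELOAD=./build/fio/spdk_bdev fio xnvme.fio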
method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:01.784 04:32:58 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:01.784 04:32:58 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:01.784 04:32:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:01.784 04:32:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.784 04:32:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:01.784 ************************************ 00:16:01.784 START TEST xnvme_rpc 00:16:01.784 ************************************ 00:16:01.784 04:32:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:01.784 04:32:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:01.784 04:32:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:01.784 04:32:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:01.784 04:32:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:01.784 04:32:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71347 00:16:01.784 04:32:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71347 00:16:01.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.784 04:32:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71347 ']' 00:16:01.784 04:32:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.784 04:32:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.784 04:32:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.784 04:32:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.784 04:32:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:01.784 04:32:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.784 [2024-11-27 04:32:58.227499] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
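This second xnvme_rpc pass differs from the first only in the -c picked out of the cc map by the conserve_cpu loop: bdev_xnvme_create is invoked with -c and the readback below is expected to say true rather than false. By hand, roughly:

  ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c   # -c enables conserve_cpu
  ./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # true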
00:16:01.784 [2024-11-27 04:32:58.227839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71347 ] 00:16:02.045 [2024-11-27 04:32:58.392836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.045 [2024-11-27 04:32:58.526746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.989 xnvme_bdev 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.989 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71347 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71347 ']' 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71347 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71347 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.990 killing process with pid 71347 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71347' 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71347 00:16:02.990 04:32:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71347 00:16:04.902 ************************************ 00:16:04.902 END TEST xnvme_rpc 00:16:04.902 ************************************ 00:16:04.902 00:16:04.902 real 0m3.054s 00:16:04.902 user 0m3.131s 00:16:04.902 sys 0m0.521s 00:16:04.902 04:33:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.902 04:33:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.902 04:33:01 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:04.902 04:33:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:04.902 04:33:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.902 04:33:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:04.902 ************************************ 00:16:04.902 START TEST xnvme_bdevperf 00:16:04.902 ************************************ 00:16:04.902 04:33:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:04.902 04:33:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:04.902 04:33:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:04.902 04:33:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:04.902 04:33:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:04.903 04:33:01 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:04.903 04:33:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:04.903 04:33:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:04.903 { 00:16:04.903 "subsystems": [ 00:16:04.903 { 00:16:04.903 "subsystem": "bdev", 00:16:04.903 "config": [ 00:16:04.903 { 00:16:04.903 "params": { 00:16:04.903 "io_mechanism": "io_uring_cmd", 00:16:04.903 "conserve_cpu": true, 00:16:04.903 "filename": "/dev/ng0n1", 00:16:04.903 "name": "xnvme_bdev" 00:16:04.903 }, 00:16:04.903 "method": "bdev_xnvme_create" 00:16:04.903 }, 00:16:04.903 { 00:16:04.903 "method": "bdev_wait_for_examine" 00:16:04.903 } 00:16:04.903 ] 00:16:04.903 } 00:16:04.903 ] 00:16:04.903 } 00:16:04.903 [2024-11-27 04:33:01.336128] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:04.903 [2024-11-27 04:33:01.336291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71420 ] 00:16:05.163 [2024-11-27 04:33:01.502003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.163 [2024-11-27 04:33:01.646080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.424 Running I/O for 5 seconds... 00:16:07.447 33221.00 IOPS, 129.77 MiB/s [2024-11-27T04:33:04.999Z] 33012.50 IOPS, 128.96 MiB/s [2024-11-27T04:33:06.383Z] 32714.00 IOPS, 127.79 MiB/s [2024-11-27T04:33:07.323Z] 33135.00 IOPS, 129.43 MiB/s [2024-11-27T04:33:07.323Z] 33330.00 IOPS, 130.20 MiB/s 00:16:10.736 Latency(us) 00:16:10.736 [2024-11-27T04:33:07.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.736 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:10.736 xnvme_bdev : 5.01 33286.04 130.02 0.00 0.00 1918.02 857.01 77030.01 00:16:10.736 [2024-11-27T04:33:07.323Z] =================================================================================================================== 00:16:10.736 [2024-11-27T04:33:07.323Z] Total : 33286.04 130.02 0.00 0.00 1918.02 857.01 77030.01 00:16:11.308 04:33:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:11.308 04:33:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:11.308 04:33:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:11.308 04:33:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:11.308 04:33:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:11.308 { 00:16:11.308 "subsystems": [ 00:16:11.308 { 00:16:11.308 "subsystem": "bdev", 00:16:11.308 "config": [ 00:16:11.308 { 00:16:11.308 "params": { 00:16:11.308 "io_mechanism": "io_uring_cmd", 00:16:11.308 "conserve_cpu": true, 00:16:11.308 "filename": "/dev/ng0n1", 00:16:11.308 "name": "xnvme_bdev" 00:16:11.308 }, 00:16:11.308 "method": "bdev_xnvme_create" 00:16:11.308 }, 00:16:11.308 { 00:16:11.308 "method": "bdev_wait_for_examine" 00:16:11.308 } 00:16:11.308 ] 00:16:11.308 } 00:16:11.308 ] 00:16:11.308 } 00:16:11.308 [2024-11-27 04:33:07.856166] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
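At a fixed queue depth, IOPS and mean latency are tied together by Little's law (depth = IOPS x latency), so the 1918.02 us average above can be sanity-checked from the IOPS total alone, ignoring submission overhead:

  awk 'BEGIN { printf "%.0f us\n", 64 / 33286.04 * 1e6 }'    # ~1923 us, close to the reported 1918.02 us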
00:16:11.308 [2024-11-27 04:33:07.856542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71496 ] 00:16:11.570 [2024-11-27 04:33:08.021799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.867 [2024-11-27 04:33:08.160896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.129 Running I/O for 5 seconds... 00:16:14.023 31670.00 IOPS, 123.71 MiB/s [2024-11-27T04:33:11.553Z] 32480.00 IOPS, 126.88 MiB/s [2024-11-27T04:33:12.507Z] 32919.33 IOPS, 128.59 MiB/s [2024-11-27T04:33:13.906Z] 32996.50 IOPS, 128.89 MiB/s 00:16:17.319 Latency(us) 00:16:17.319 [2024-11-27T04:33:13.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.319 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:17.319 xnvme_bdev : 5.00 33163.67 129.55 0.00 0.00 1925.07 103.98 18551.73 00:16:17.319 [2024-11-27T04:33:13.906Z] =================================================================================================================== 00:16:17.319 [2024-11-27T04:33:13.906Z] Total : 33163.67 129.55 0.00 0.00 1925.07 103.98 18551.73 00:16:17.892 04:33:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:17.892 04:33:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:16:17.892 04:33:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:17.892 04:33:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:17.892 04:33:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:17.892 { 00:16:17.892 "subsystems": [ 00:16:17.892 { 00:16:17.892 "subsystem": "bdev", 00:16:17.892 "config": [ 00:16:17.892 { 00:16:17.892 "params": { 00:16:17.892 "io_mechanism": "io_uring_cmd", 00:16:17.892 "conserve_cpu": true, 00:16:17.892 "filename": "/dev/ng0n1", 00:16:17.892 "name": "xnvme_bdev" 00:16:17.892 }, 00:16:17.892 "method": "bdev_xnvme_create" 00:16:17.892 }, 00:16:17.892 { 00:16:17.892 "method": "bdev_wait_for_examine" 00:16:17.892 } 00:16:17.892 ] 00:16:17.892 } 00:16:17.892 ] 00:16:17.892 } 00:16:17.892 [2024-11-27 04:33:14.369675] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:17.892 [2024-11-27 04:33:14.370330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71570 ] 00:16:18.153 [2024-11-27 04:33:14.552596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.153 [2024-11-27 04:33:14.696464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.724 Running I/O for 5 seconds... 
00:16:20.607 72832.00 IOPS, 284.50 MiB/s [2024-11-27T04:33:18.230Z] 73920.00 IOPS, 288.75 MiB/s [2024-11-27T04:33:19.164Z] 75221.33 IOPS, 293.83 MiB/s [2024-11-27T04:33:20.097Z] 76304.00 IOPS, 298.06 MiB/s 00:16:23.510 Latency(us) 00:16:23.510 [2024-11-27T04:33:20.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.510 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:16:23.510 xnvme_bdev : 5.00 76498.35 298.82 0.00 0.00 833.17 403.30 3982.57 00:16:23.510 [2024-11-27T04:33:20.097Z] =================================================================================================================== 00:16:23.510 [2024-11-27T04:33:20.097Z] Total : 76498.35 298.82 0.00 0.00 833.17 403.30 3982.57 00:16:24.443 04:33:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:24.443 04:33:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:16:24.443 04:33:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:24.443 04:33:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:24.443 04:33:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:24.443 { 00:16:24.443 "subsystems": [ 00:16:24.443 { 00:16:24.443 "subsystem": "bdev", 00:16:24.443 "config": [ 00:16:24.443 { 00:16:24.443 "params": { 00:16:24.443 "io_mechanism": "io_uring_cmd", 00:16:24.443 "conserve_cpu": true, 00:16:24.443 "filename": "/dev/ng0n1", 00:16:24.443 "name": "xnvme_bdev" 00:16:24.443 }, 00:16:24.443 "method": "bdev_xnvme_create" 00:16:24.443 }, 00:16:24.443 { 00:16:24.443 "method": "bdev_wait_for_examine" 00:16:24.443 } 00:16:24.443 ] 00:16:24.443 } 00:16:24.443 ] 00:16:24.443 } 00:16:24.443 [2024-11-27 04:33:20.805322] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:24.443 [2024-11-27 04:33:20.805450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71644 ] 00:16:24.443 [2024-11-27 04:33:20.965358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.702 [2024-11-27 04:33:21.068660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.959 Running I/O for 5 seconds... 
00:16:26.824 3285.00 IOPS, 12.83 MiB/s [2024-11-27T04:33:24.345Z] 2485.00 IOPS, 9.71 MiB/s [2024-11-27T04:33:25.716Z] 1710.00 IOPS, 6.68 MiB/s [2024-11-27T04:33:26.651Z] 1328.25 IOPS, 5.19 MiB/s [2024-11-27T04:33:26.909Z] 1100.40 IOPS, 4.30 MiB/s 00:16:30.322 Latency(us) 00:16:30.322 [2024-11-27T04:33:26.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.322 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:16:30.322 xnvme_bdev : 5.37 1037.02 4.05 0.00 0.00 59727.69 57.50 916294.10 00:16:30.322 [2024-11-27T04:33:26.909Z] =================================================================================================================== 00:16:30.322 [2024-11-27T04:33:26.909Z] Total : 1037.02 4.05 0.00 0.00 59727.69 57.50 916294.10 00:16:30.888 00:16:30.888 real 0m26.153s 00:16:30.888 user 0m20.186s 00:16:30.888 sys 0m4.990s 00:16:30.888 ************************************ 00:16:30.888 END TEST xnvme_bdevperf 00:16:30.888 ************************************ 00:16:30.888 04:33:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.888 04:33:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:30.888 04:33:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:30.888 04:33:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:30.888 04:33:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.888 04:33:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:31.146 ************************************ 00:16:31.146 START TEST xnvme_fio_plugin 00:16:31.146 ************************************ 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:31.146 
04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:31.146 04:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:31.146 { 00:16:31.146 "subsystems": [ 00:16:31.146 { 00:16:31.146 "subsystem": "bdev", 00:16:31.146 "config": [ 00:16:31.146 { 00:16:31.146 "params": { 00:16:31.146 "io_mechanism": "io_uring_cmd", 00:16:31.146 "conserve_cpu": true, 00:16:31.146 "filename": "/dev/ng0n1", 00:16:31.146 "name": "xnvme_bdev" 00:16:31.146 }, 00:16:31.147 "method": "bdev_xnvme_create" 00:16:31.147 }, 00:16:31.147 { 00:16:31.147 "method": "bdev_wait_for_examine" 00:16:31.147 } 00:16:31.147 ] 00:16:31.147 } 00:16:31.147 ] 00:16:31.147 } 00:16:31.147 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:31.147 fio-3.35 00:16:31.147 Starting 1 thread 00:16:37.701 00:16:37.701 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71768: Wed Nov 27 04:33:33 2024 00:16:37.701 read: IOPS=39.4k, BW=154MiB/s (161MB/s)(769MiB/5001msec) 00:16:37.701 slat (nsec): min=2805, max=70097, avg=4162.08, stdev=2500.44 00:16:37.701 clat (usec): min=776, max=4694, avg=1458.50, stdev=302.38 00:16:37.701 lat (usec): min=779, max=4720, avg=1462.66, stdev=303.19 00:16:37.701 clat percentiles (usec): 00:16:37.701 | 1.00th=[ 930], 5.00th=[ 1037], 10.00th=[ 1106], 20.00th=[ 1205], 00:16:37.701 | 30.00th=[ 1287], 40.00th=[ 1352], 50.00th=[ 1418], 60.00th=[ 1500], 00:16:37.701 | 70.00th=[ 1582], 80.00th=[ 1680], 90.00th=[ 1860], 95.00th=[ 2008], 00:16:37.701 | 99.00th=[ 2311], 99.50th=[ 2442], 99.90th=[ 2835], 99.95th=[ 3261], 00:16:37.701 | 99.99th=[ 4555] 00:16:37.701 bw ( KiB/s): min=147968, max=166912, per=100.00%, avg=157525.33, stdev=6154.66, samples=9 00:16:37.701 iops : min=36992, max=41728, avg=39381.33, stdev=1538.66, samples=9 00:16:37.701 lat (usec) : 1000=3.22% 00:16:37.701 lat (msec) : 2=91.65%, 4=5.10%, 10=0.03% 00:16:37.701 cpu : usr=55.96%, sys=40.98%, ctx=14, majf=0, minf=762 00:16:37.701 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:37.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.701 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:16:37.701 issued rwts: total=196800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:37.701 00:16:37.701 Run status group 0 (all jobs): 00:16:37.701 READ: bw=154MiB/s (161MB/s), 154MiB/s-154MiB/s (161MB/s-161MB/s), io=769MiB (806MB), run=5001-5001msec 00:16:37.701 ----------------------------------------------------- 00:16:37.701 Suppressions used: 00:16:37.701 count bytes template 00:16:37.701 1 11 /usr/src/fio/parse.c 00:16:37.701 1 8 libtcmalloc_minimal.so 00:16:37.701 1 904 libcrypto.so 00:16:37.701 ----------------------------------------------------- 00:16:37.701 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:37.701 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:37.960 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:37.960 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:37.960 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:37.960 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:37.960 04:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:37.960 { 00:16:37.960 "subsystems": [ 00:16:37.960 { 00:16:37.960 "subsystem": "bdev", 00:16:37.960 "config": [ 00:16:37.960 { 00:16:37.960 "params": { 00:16:37.960 "io_mechanism": "io_uring_cmd", 00:16:37.960 "conserve_cpu": true, 00:16:37.960 "filename": "/dev/ng0n1", 00:16:37.960 "name": "xnvme_bdev" 00:16:37.960 }, 00:16:37.960 "method": "bdev_xnvme_create" 00:16:37.960 }, 00:16:37.960 { 00:16:37.960 "method": "bdev_wait_for_examine" 00:16:37.960 } 00:16:37.960 ] 00:16:37.960 } 00:16:37.960 ] 00:16:37.960 } 00:16:37.960 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:37.960 fio-3.35 00:16:37.960 Starting 1 thread 00:16:44.568 00:16:44.568 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71853: Wed Nov 27 04:33:40 2024 00:16:44.568 write: IOPS=33.3k, BW=130MiB/s (136MB/s)(651MiB/5001msec); 0 zone resets 00:16:44.568 slat (usec): min=2, max=344, avg= 3.67, stdev= 3.08 00:16:44.568 clat (usec): min=73, max=15671, avg=1808.05, stdev=1635.96 00:16:44.568 lat (usec): min=76, max=15674, avg=1811.72, stdev=1635.99 00:16:44.568 clat percentiles (usec): 00:16:44.568 | 1.00th=[ 441], 5.00th=[ 758], 10.00th=[ 865], 20.00th=[ 1045], 00:16:44.568 | 30.00th=[ 1172], 40.00th=[ 1270], 50.00th=[ 1336], 60.00th=[ 1434], 00:16:44.568 | 70.00th=[ 1549], 80.00th=[ 1762], 90.00th=[ 3228], 95.00th=[ 5800], 00:16:44.568 | 99.00th=[ 8979], 99.50th=[10159], 99.90th=[11994], 99.95th=[12649], 00:16:44.568 | 99.99th=[14353] 00:16:44.568 bw ( KiB/s): min=125456, max=143840, per=100.00%, avg=133480.00, stdev=7239.77, samples=9 00:16:44.568 iops : min=31364, max=35960, avg=33370.00, stdev=1809.94, samples=9 00:16:44.568 lat (usec) : 100=0.02%, 250=0.11%, 500=1.19%, 750=3.53%, 1000=12.78% 00:16:44.568 lat (msec) : 2=67.82%, 4=6.13%, 10=7.89%, 20=0.54% 00:16:44.568 cpu : usr=76.80%, sys=15.28%, ctx=8, majf=0, minf=762 00:16:44.568 IO depths : 1=0.7%, 2=1.5%, 4=3.3%, 8=7.6%, 16=19.8%, 32=63.9%, >=64=3.2% 00:16:44.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.568 complete : 0=0.0%, 4=97.6%, 8=0.2%, 16=0.3%, 32=0.4%, 64=1.4%, >=64=0.0% 00:16:44.568 issued rwts: total=0,166585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:44.568 00:16:44.568 Run status group 0 (all jobs): 00:16:44.568 WRITE: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=651MiB (682MB), run=5001-5001msec 00:16:44.568 ----------------------------------------------------- 00:16:44.568 Suppressions used: 00:16:44.568 count bytes template 00:16:44.568 1 11 /usr/src/fio/parse.c 00:16:44.568 1 8 libtcmalloc_minimal.so 00:16:44.568 1 904 libcrypto.so 00:16:44.568 ----------------------------------------------------- 00:16:44.568 00:16:44.568 00:16:44.568 real 0m13.544s 00:16:44.568 user 0m9.305s 00:16:44.568 sys 0m3.360s 00:16:44.568 04:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.568 ************************************ 00:16:44.568 END TEST xnvme_fio_plugin 00:16:44.568 ************************************ 00:16:44.568 04:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:44.568 Process with pid 71347 is not found 00:16:44.568 04:33:41 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71347 00:16:44.568 04:33:41 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71347 ']' 00:16:44.568 04:33:41 nvme_xnvme -- 
common/autotest_common.sh@958 -- # kill -0 71347 00:16:44.568 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71347) - No such process 00:16:44.568 04:33:41 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71347 is not found' 00:16:44.568 00:16:44.568 real 3m30.273s 00:16:44.568 user 2m1.386s 00:16:44.568 sys 1m12.703s 00:16:44.568 04:33:41 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.568 04:33:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:44.568 ************************************ 00:16:44.568 END TEST nvme_xnvme 00:16:44.568 ************************************ 00:16:44.568 04:33:41 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:44.568 04:33:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:44.568 04:33:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.568 04:33:41 -- common/autotest_common.sh@10 -- # set +x 00:16:44.568 ************************************ 00:16:44.568 START TEST blockdev_xnvme 00:16:44.568 ************************************ 00:16:44.568 04:33:41 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:44.827 * Looking for test storage... 00:16:44.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:44.827 04:33:41 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:44.827 04:33:41 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:44.827 04:33:41 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:44.827 04:33:41 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:44.827 04:33:41 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:16:44.827 04:33:41 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.827 04:33:41 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:44.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.827 --rc genhtml_branch_coverage=1 00:16:44.827 --rc genhtml_function_coverage=1 00:16:44.827 --rc genhtml_legend=1 00:16:44.827 --rc geninfo_all_blocks=1 00:16:44.827 --rc geninfo_unexecuted_blocks=1 00:16:44.827 00:16:44.827 ' 00:16:44.827 04:33:41 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:44.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.827 --rc genhtml_branch_coverage=1 00:16:44.827 --rc genhtml_function_coverage=1 00:16:44.827 --rc genhtml_legend=1 00:16:44.827 --rc geninfo_all_blocks=1 00:16:44.827 --rc geninfo_unexecuted_blocks=1 00:16:44.827 00:16:44.827 ' 00:16:44.827 04:33:41 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:44.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.827 --rc genhtml_branch_coverage=1 00:16:44.827 --rc genhtml_function_coverage=1 00:16:44.827 --rc genhtml_legend=1 00:16:44.827 --rc geninfo_all_blocks=1 00:16:44.827 --rc geninfo_unexecuted_blocks=1 00:16:44.827 00:16:44.827 ' 00:16:44.827 04:33:41 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:44.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.827 --rc genhtml_branch_coverage=1 00:16:44.827 --rc genhtml_function_coverage=1 00:16:44.827 --rc genhtml_legend=1 00:16:44.827 --rc geninfo_all_blocks=1 00:16:44.827 --rc geninfo_unexecuted_blocks=1 00:16:44.827 00:16:44.827 ' 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71992 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71992 00:16:44.827 04:33:41 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 71992 ']' 00:16:44.827 04:33:41 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.827 04:33:41 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.827 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:44.827 04:33:41 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.827 04:33:41 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.827 04:33:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:44.827 [2024-11-27 04:33:41.368830] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
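The start_spdk_tgt/waitforlisten pair traced above backgrounds the target binary and then blocks until its RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket (the explicit polling loop is an illustration of what waitforlisten does, not the autotest_common.sh implementation):

# Launch the target and poll its RPC socket until it responds.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods fails until spdk_tgt is listening on /var/tmp/spdk.sock
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done

Once the socket answers, the rest of the test drives the target through rpc_cmd, as the trace continues below.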
00:16:44.827 [2024-11-27 04:33:41.369137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71992 ] 00:16:45.086 [2024-11-27 04:33:41.527963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.086 [2024-11-27 04:33:41.630314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.663 04:33:42 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.663 04:33:42 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:16:45.663 04:33:42 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:16:45.663 04:33:42 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:16:45.663 04:33:42 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:45.663 04:33:42 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:45.663 04:33:42 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:46.230 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:46.798 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:16:46.798 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:16:46.798 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:16:46.798 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:16:46.798 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:46.798 04:33:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:46.798 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:46.798 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:46.798 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:46.798 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:46.798 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:46.799 04:33:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.799 04:33:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:16:46.799 nvme0n1 00:16:46.799 nvme0n2 00:16:46.799 nvme0n3 00:16:46.799 nvme1n1 00:16:46.799 nvme2n1 00:16:46.799 nvme3n1 00:16:46.799 04:33:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:16:46.799 04:33:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.799 04:33:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:46.799 04:33:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:16:46.799 04:33:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.799 04:33:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:46.799 04:33:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:16:46.799 04:33:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.799 04:33:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:46.799 04:33:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.799 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:46.799 04:33:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.799 04:33:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:47.059 04:33:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.059 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:16:47.059 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:16:47.059 04:33:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.059 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:16:47.059 04:33:43 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:16:47.059 04:33:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.059 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:16:47.059 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:16:47.059 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "dee578f0-1d33-486d-b76f-7a13bfe8ffcd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dee578f0-1d33-486d-b76f-7a13bfe8ffcd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "e1b1a04a-aa62-4d2f-8b0d-7fa3927b77a4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e1b1a04a-aa62-4d2f-8b0d-7fa3927b77a4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "225513f7-cdc9-45d5-b7d1-28da4175ab1e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "225513f7-cdc9-45d5-b7d1-28da4175ab1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "5df7184e-d38b-4ee9-8c1f-26f7d0a2dee2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5df7184e-d38b-4ee9-8c1f-26f7d0a2dee2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "596410a3-b99a-4779-9284-127b2ad665ee"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "596410a3-b99a-4779-9284-127b2ad665ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cfc303e2-941c-4575-a3e5-10647388c102"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cfc303e2-941c-4575-a3e5-10647388c102",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:47.059 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:16:47.059 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:16:47.059 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:16:47.059 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 71992 00:16:47.059 04:33:43 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71992 ']' 00:16:47.059 04:33:43 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71992 00:16:47.059 04:33:43 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:16:47.059 04:33:43 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.059 04:33:43 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71992 00:16:47.059 04:33:43 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.059 04:33:43 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.059 killing process with pid 71992 00:16:47.059 04:33:43 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71992' 00:16:47.059 04:33:43 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71992 00:16:47.059 
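The mapfile/jq pipeline above turns the raw bdev_get_bdevs JSON into a plain list of unclaimed bdev names for the later sub-tests. Condensed into a standalone sketch (one combined jq pass instead of the script's two separate filters; default RPC socket assumed):

# Enumerate unclaimed bdevs and keep only their names.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
mapfile -t bdevs_name < <("$rpc" bdev_get_bdevs \
    | jq -r '.[] | select(.claimed == false) | .name')
printf '%s\n' "${bdevs_name[@]}"   # nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1 in this run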
04:33:43 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71992 00:16:48.467 04:33:45 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:48.467 04:33:45 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:48.467 04:33:45 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:48.467 04:33:45 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.467 04:33:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:48.467 ************************************ 00:16:48.467 START TEST bdev_hello_world 00:16:48.467 ************************************ 00:16:48.467 04:33:45 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:48.725 [2024-11-27 04:33:45.088507] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:48.725 [2024-11-27 04:33:45.088634] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72267 ] 00:16:48.725 [2024-11-27 04:33:45.240710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.984 [2024-11-27 04:33:45.343169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.241 [2024-11-27 04:33:45.715626] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:49.241 [2024-11-27 04:33:45.715827] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:16:49.241 [2024-11-27 04:33:45.715852] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:49.241 [2024-11-27 04:33:45.717780] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:49.241 [2024-11-27 04:33:45.718345] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:49.241 [2024-11-27 04:33:45.718370] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:49.241 [2024-11-27 04:33:45.718526] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:16:49.241 00:16:49.241 [2024-11-27 04:33:45.718549] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:50.178 ************************************ 00:16:50.178 END TEST bdev_hello_world 00:16:50.178 ************************************ 00:16:50.178 00:16:50.178 real 0m1.427s 00:16:50.178 user 0m1.094s 00:16:50.178 sys 0m0.184s 00:16:50.178 04:33:46 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.178 04:33:46 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:50.178 04:33:46 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:16:50.178 04:33:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:50.178 04:33:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.178 04:33:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:50.178 ************************************ 00:16:50.178 START TEST bdev_bounds 00:16:50.178 ************************************ 00:16:50.178 Process bdevio pid: 72304 00:16:50.178 04:33:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:16:50.178 04:33:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72304 00:16:50.178 04:33:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:50.178 04:33:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72304' 00:16:50.178 04:33:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72304 00:16:50.178 04:33:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72304 ']' 00:16:50.178 04:33:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:50.178 04:33:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.178 04:33:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.178 04:33:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.178 04:33:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.178 04:33:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:50.178 [2024-11-27 04:33:46.574250] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
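bdevio is started with -w so it registers its CUnit suites and then waits; tests.py kicks the suites off over RPC once the app is listening. A hedged sketch of that two-step pattern (the sleep stands in for the waitforlisten call the harness actually uses):

# Start bdevio in wait mode against the generated bdev config, then drive it.
bdevio_dir=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio
"$bdevio_dir"/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
bdevio_pid=$!
sleep 1                                  # stand-in for waitforlisten "$bdevio_pid"
"$bdevio_dir"/tests.py perform_tests     # runs every registered suite on every bdev
kill "$bdevio_pid"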
00:16:50.178 [2024-11-27 04:33:46.574371] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72304 ] 00:16:50.178 [2024-11-27 04:33:46.731334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:50.435 [2024-11-27 04:33:46.837559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.435 [2024-11-27 04:33:46.837948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.435 [2024-11-27 04:33:46.838133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.000 04:33:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.000 04:33:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:16:51.000 04:33:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:51.000 I/O targets: 00:16:51.000 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:51.000 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:51.000 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:51.000 nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:16:51.000 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:51.000 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:51.000 00:16:51.000 00:16:51.000 CUnit - A unit testing framework for C - Version 2.1-3 00:16:51.000 http://cunit.sourceforge.net/ 00:16:51.000 00:16:51.000 00:16:51.000 Suite: bdevio tests on: nvme3n1 00:16:51.000 Test: blockdev write read block ...passed 00:16:51.000 Test: blockdev write zeroes read block ...passed 00:16:51.000 Test: blockdev write zeroes read no split ...passed 00:16:51.000 Test: blockdev write zeroes read split ...passed 00:16:51.259 Test: blockdev write zeroes read split partial ...passed 00:16:51.259 Test: blockdev reset ...passed 00:16:51.259 Test: blockdev write read 8 blocks ...passed 00:16:51.259 Test: blockdev write read size > 128k ...passed 00:16:51.259 Test: blockdev write read invalid size ...passed 00:16:51.259 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:51.259 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:51.259 Test: blockdev write read max offset ...passed 00:16:51.259 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:51.259 Test: blockdev writev readv 8 blocks ...passed 00:16:51.259 Test: blockdev writev readv 30 x 1block ...passed 00:16:51.259 Test: blockdev writev readv block ...passed 00:16:51.259 Test: blockdev writev readv size > 128k ...passed 00:16:51.259 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:51.259 Test: blockdev comparev and writev ...passed 00:16:51.259 Test: blockdev nvme passthru rw ...passed 00:16:51.259 Test: blockdev nvme passthru vendor specific ...passed 00:16:51.259 Test: blockdev nvme admin passthru ...passed 00:16:51.259 Test: blockdev copy ...passed 00:16:51.259 Suite: bdevio tests on: nvme2n1 00:16:51.259 Test: blockdev write read block ...passed 00:16:51.259 Test: blockdev write zeroes read block ...passed 00:16:51.259 Test: blockdev write zeroes read no split ...passed 00:16:51.259 Test: blockdev write zeroes read split ...passed 00:16:51.259 Test: blockdev write zeroes read split partial ...passed 00:16:51.259 Test: blockdev reset ...passed 
00:16:51.259 Test: blockdev write read 8 blocks ...passed 00:16:51.259 Test: blockdev write read size > 128k ...passed 00:16:51.259 Test: blockdev write read invalid size ...passed 00:16:51.259 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:51.259 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:51.259 Test: blockdev write read max offset ...passed 00:16:51.259 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:51.259 Test: blockdev writev readv 8 blocks ...passed 00:16:51.259 Test: blockdev writev readv 30 x 1block ...passed 00:16:51.259 Test: blockdev writev readv block ...passed 00:16:51.259 Test: blockdev writev readv size > 128k ...passed 00:16:51.259 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:51.259 Test: blockdev comparev and writev ...passed 00:16:51.259 Test: blockdev nvme passthru rw ...passed 00:16:51.259 Test: blockdev nvme passthru vendor specific ...passed 00:16:51.259 Test: blockdev nvme admin passthru ...passed 00:16:51.259 Test: blockdev copy ...passed 00:16:51.259 Suite: bdevio tests on: nvme1n1 00:16:51.259 Test: blockdev write read block ...passed 00:16:51.259 Test: blockdev write zeroes read block ...passed 00:16:51.259 Test: blockdev write zeroes read no split ...passed 00:16:51.259 Test: blockdev write zeroes read split ...passed 00:16:51.259 Test: blockdev write zeroes read split partial ...passed 00:16:51.259 Test: blockdev reset ...passed 00:16:51.259 Test: blockdev write read 8 blocks ...passed 00:16:51.259 Test: blockdev write read size > 128k ...passed 00:16:51.259 Test: blockdev write read invalid size ...passed 00:16:51.259 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:51.259 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:51.259 Test: blockdev write read max offset ...passed 00:16:51.259 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:51.259 Test: blockdev writev readv 8 blocks ...passed 00:16:51.259 Test: blockdev writev readv 30 x 1block ...passed 00:16:51.259 Test: blockdev writev readv block ...passed 00:16:51.259 Test: blockdev writev readv size > 128k ...passed 00:16:51.259 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:51.259 Test: blockdev comparev and writev ...passed 00:16:51.259 Test: blockdev nvme passthru rw ...passed 00:16:51.259 Test: blockdev nvme passthru vendor specific ...passed 00:16:51.259 Test: blockdev nvme admin passthru ...passed 00:16:51.259 Test: blockdev copy ...passed 00:16:51.259 Suite: bdevio tests on: nvme0n3 00:16:51.259 Test: blockdev write read block ...passed 00:16:51.259 Test: blockdev write zeroes read block ...passed 00:16:51.259 Test: blockdev write zeroes read no split ...passed 00:16:51.259 Test: blockdev write zeroes read split ...passed 00:16:51.259 Test: blockdev write zeroes read split partial ...passed 00:16:51.259 Test: blockdev reset ...passed 00:16:51.259 Test: blockdev write read 8 blocks ...passed 00:16:51.259 Test: blockdev write read size > 128k ...passed 00:16:51.259 Test: blockdev write read invalid size ...passed 00:16:51.259 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:51.259 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:51.259 Test: blockdev write read max offset ...passed 00:16:51.259 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:51.259 Test: blockdev writev readv 8 blocks 
...passed 00:16:51.259 Test: blockdev writev readv 30 x 1block ...passed 00:16:51.259 Test: blockdev writev readv block ...passed 00:16:51.259 Test: blockdev writev readv size > 128k ...passed 00:16:51.259 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:51.259 Test: blockdev comparev and writev ...passed 00:16:51.259 Test: blockdev nvme passthru rw ...passed 00:16:51.259 Test: blockdev nvme passthru vendor specific ...passed 00:16:51.259 Test: blockdev nvme admin passthru ...passed 00:16:51.259 Test: blockdev copy ...passed 00:16:51.259 Suite: bdevio tests on: nvme0n2 00:16:51.259 Test: blockdev write read block ...passed 00:16:51.518 Test: blockdev write zeroes read block ...passed 00:16:51.518 Test: blockdev write zeroes read no split ...passed 00:16:51.518 Test: blockdev write zeroes read split ...passed 00:16:51.518 Test: blockdev write zeroes read split partial ...passed 00:16:51.518 Test: blockdev reset ...passed 00:16:51.518 Test: blockdev write read 8 blocks ...passed 00:16:51.518 Test: blockdev write read size > 128k ...passed 00:16:51.518 Test: blockdev write read invalid size ...passed 00:16:51.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:51.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:51.518 Test: blockdev write read max offset ...passed 00:16:51.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:51.518 Test: blockdev writev readv 8 blocks ...passed 00:16:51.518 Test: blockdev writev readv 30 x 1block ...passed 00:16:51.518 Test: blockdev writev readv block ...passed 00:16:51.518 Test: blockdev writev readv size > 128k ...passed 00:16:51.518 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:51.518 Test: blockdev comparev and writev ...passed 00:16:51.518 Test: blockdev nvme passthru rw ...passed 00:16:51.518 Test: blockdev nvme passthru vendor specific ...passed 00:16:51.518 Test: blockdev nvme admin passthru ...passed 00:16:51.518 Test: blockdev copy ...passed 00:16:51.518 Suite: bdevio tests on: nvme0n1 00:16:51.518 Test: blockdev write read block ...passed 00:16:51.518 Test: blockdev write zeroes read block ...passed 00:16:51.518 Test: blockdev write zeroes read no split ...passed 00:16:51.518 Test: blockdev write zeroes read split ...passed 00:16:51.518 Test: blockdev write zeroes read split partial ...passed 00:16:51.518 Test: blockdev reset ...passed 00:16:51.518 Test: blockdev write read 8 blocks ...passed 00:16:51.518 Test: blockdev write read size > 128k ...passed 00:16:51.518 Test: blockdev write read invalid size ...passed 00:16:51.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:51.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:51.518 Test: blockdev write read max offset ...passed 00:16:51.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:51.518 Test: blockdev writev readv 8 blocks ...passed 00:16:51.518 Test: blockdev writev readv 30 x 1block ...passed 00:16:51.518 Test: blockdev writev readv block ...passed 00:16:51.518 Test: blockdev writev readv size > 128k ...passed 00:16:51.518 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:51.518 Test: blockdev comparev and writev ...passed 00:16:51.518 Test: blockdev nvme passthru rw ...passed 00:16:51.518 Test: blockdev nvme passthru vendor specific ...passed 00:16:51.518 Test: blockdev nvme admin passthru ...passed 00:16:51.518 Test: blockdev copy ...passed 
00:16:51.518 00:16:51.518 Run Summary: Type Total Ran Passed Failed Inactive 00:16:51.518 suites 6 6 n/a 0 0 00:16:51.518 tests 138 138 138 0 0 00:16:51.518 asserts 780 780 780 0 n/a 00:16:51.518 00:16:51.518 Elapsed time = 1.157 seconds 00:16:51.518 0 00:16:51.518 04:33:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72304 00:16:51.518 04:33:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72304 ']' 00:16:51.518 04:33:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72304 00:16:51.518 04:33:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:16:51.518 04:33:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.518 04:33:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72304 00:16:51.518 04:33:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.518 04:33:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.518 04:33:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72304' 00:16:51.518 killing process with pid 72304 00:16:51.518 04:33:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72304 00:16:51.518 04:33:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72304 00:16:52.451 04:33:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:52.451 00:16:52.451 real 0m2.259s 00:16:52.451 user 0m5.635s 00:16:52.451 sys 0m0.294s 00:16:52.451 04:33:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.451 04:33:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:52.451 ************************************ 00:16:52.451 END TEST bdev_bounds 00:16:52.451 ************************************ 00:16:52.451 04:33:48 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:52.451 04:33:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:52.451 04:33:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.451 04:33:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:52.451 ************************************ 00:16:52.451 START TEST bdev_nbd 00:16:52.451 ************************************ 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
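With six bdevs and six /dev/nbdX slots paired up, the per-device cycle traced below (nbd_start_disk over the dedicated RPC socket, wait for the kernel to register the device, one-block direct-I/O read) condenses to roughly the following sketch; the retry bound, sleep interval, and output path are assumptions, and the trace lets the RPC pick the nbd device while this sketch pins it explicitly:

# Attach one bdev to an nbd device and smoke-read a single 4 KiB block.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0
for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd0 /proc/partitions && break   # kernel has registered nbd0
    sleep 0.1
done
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct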
00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:52.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72358 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72358 /var/tmp/spdk-nbd.sock 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72358 ']' 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.451 04:33:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:52.451 [2024-11-27 04:33:48.913783] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:16:52.451 [2024-11-27 04:33:48.914091] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.709 [2024-11-27 04:33:49.075094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.709 [2024-11-27 04:33:49.178912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.275 04:33:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.275 04:33:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:16:53.275 04:33:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:53.275 04:33:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:53.275 04:33:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:53.275 04:33:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:53.275 04:33:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:53.275 04:33:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:53.275 04:33:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:53.275 04:33:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:53.275 04:33:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:53.275 04:33:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:53.275 04:33:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:53.275 04:33:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:53.275 04:33:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:53.532 
1+0 records in 00:16:53.532 1+0 records out 00:16:53.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662885 s, 6.2 MB/s 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.532 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:53.533 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:53.533 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:53.533 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:53.533 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:53.791 1+0 records in 00:16:53.791 1+0 records out 00:16:53.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106708 s, 3.8 MB/s 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:53.791 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:16:54.050 04:33:50 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:54.050 1+0 records in 00:16:54.050 1+0 records out 00:16:54.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536658 s, 7.6 MB/s 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:54.050 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:54.051 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:54.309 1+0 records in 00:16:54.309 1+0 records out 00:16:54.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011144 s, 3.7 MB/s 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:54.309 04:33:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:54.566 1+0 records in 00:16:54.566 1+0 records out 00:16:54.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000924252 s, 4.4 MB/s 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:54.566 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:16:54.824 04:33:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:54.824 1+0 records in 00:16:54.824 1+0 records out 00:16:54.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000957636 s, 4.3 MB/s 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:54.824 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:55.146 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:55.146 { 00:16:55.146 "nbd_device": "/dev/nbd0", 00:16:55.146 "bdev_name": "nvme0n1" 00:16:55.146 }, 00:16:55.146 { 00:16:55.146 "nbd_device": "/dev/nbd1", 00:16:55.146 "bdev_name": "nvme0n2" 00:16:55.146 }, 00:16:55.146 { 00:16:55.146 "nbd_device": "/dev/nbd2", 00:16:55.146 "bdev_name": "nvme0n3" 00:16:55.146 }, 00:16:55.146 { 00:16:55.146 "nbd_device": "/dev/nbd3", 00:16:55.146 "bdev_name": "nvme1n1" 00:16:55.146 }, 00:16:55.146 { 00:16:55.146 "nbd_device": "/dev/nbd4", 00:16:55.146 "bdev_name": "nvme2n1" 00:16:55.146 }, 00:16:55.146 { 00:16:55.146 "nbd_device": "/dev/nbd5", 00:16:55.146 "bdev_name": "nvme3n1" 00:16:55.146 } 00:16:55.146 ]' 00:16:55.146 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:55.146 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:55.146 { 00:16:55.146 "nbd_device": "/dev/nbd0", 00:16:55.146 "bdev_name": "nvme0n1" 00:16:55.146 }, 00:16:55.146 { 00:16:55.146 "nbd_device": "/dev/nbd1", 00:16:55.146 "bdev_name": "nvme0n2" 00:16:55.146 }, 00:16:55.146 { 00:16:55.146 "nbd_device": "/dev/nbd2", 00:16:55.146 "bdev_name": "nvme0n3" 00:16:55.146 }, 00:16:55.146 { 00:16:55.146 "nbd_device": "/dev/nbd3", 00:16:55.146 "bdev_name": "nvme1n1" 00:16:55.146 }, 00:16:55.146 { 00:16:55.146 "nbd_device": "/dev/nbd4", 00:16:55.146 "bdev_name": "nvme2n1" 00:16:55.146 }, 00:16:55.146 { 00:16:55.146 "nbd_device": "/dev/nbd5", 00:16:55.146 "bdev_name": "nvme3n1" 00:16:55.146 } 00:16:55.146 ]' 00:16:55.146 04:33:51 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:55.146 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:55.146 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:55.146 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:55.146 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:55.146 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:55.146 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.146 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:55.404 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:55.662 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:55.662 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:55.662 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.662 04:33:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:55.662 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:55.662 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:55.662 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:55.662 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:55.662 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:55.662 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:55.662 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:55.662 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:55.662 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.662 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:55.920 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:55.920 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:55.920 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:55.920 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:55.920 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:55.920 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:55.920 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:55.920 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:55.920 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.920 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:56.179 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:56.179 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:56.179 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:56.179 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:56.179 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:56.179 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:56.179 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:56.179 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:56.179 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:56.179 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:56.438 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:56.438 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:56.438 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:56.438 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:56.438 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:56.438 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:56.438 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:56.438 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:56.438 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:56.438 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:56.438 04:33:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:56.696 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:56.954 /dev/nbd0 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:56.954 1+0 records in 00:16:56.954 1+0 records out 00:16:56.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385176 s, 10.6 MB/s 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:56.954 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:16:57.214 /dev/nbd1 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:57.214 1+0 records in 00:16:57.214 1+0 records out 00:16:57.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135064 s, 3.0 MB/s 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:57.214 04:33:53 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:57.214 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:16:57.473 /dev/nbd10 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:57.473 1+0 records in 00:16:57.473 1+0 records out 00:16:57.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108327 s, 3.8 MB/s 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:57.473 04:33:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:57.731 /dev/nbd11 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:57.731 04:33:54 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:57.731 1+0 records in 00:16:57.731 1+0 records out 00:16:57.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584049 s, 7.0 MB/s 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:57.731 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:57.990 /dev/nbd12 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:57.990 1+0 records in 00:16:57.990 1+0 records out 00:16:57.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101877 s, 4.0 MB/s 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:57.990 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:58.248 /dev/nbd13 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.248 1+0 records in 00:16:58.248 1+0 records out 00:16:58.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064133 s, 6.4 MB/s 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:58.248 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:58.536 { 00:16:58.536 "nbd_device": "/dev/nbd0", 00:16:58.536 "bdev_name": "nvme0n1" 00:16:58.536 }, 00:16:58.536 { 00:16:58.536 "nbd_device": "/dev/nbd1", 00:16:58.536 "bdev_name": "nvme0n2" 00:16:58.536 }, 00:16:58.536 { 00:16:58.536 "nbd_device": "/dev/nbd10", 00:16:58.536 "bdev_name": "nvme0n3" 00:16:58.536 }, 00:16:58.536 { 00:16:58.536 "nbd_device": "/dev/nbd11", 00:16:58.536 "bdev_name": "nvme1n1" 00:16:58.536 }, 00:16:58.536 { 00:16:58.536 "nbd_device": "/dev/nbd12", 00:16:58.536 "bdev_name": "nvme2n1" 00:16:58.536 }, 00:16:58.536 { 00:16:58.536 "nbd_device": "/dev/nbd13", 00:16:58.536 "bdev_name": "nvme3n1" 00:16:58.536 } 00:16:58.536 ]' 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:16:58.536 { 00:16:58.536 "nbd_device": "/dev/nbd0", 00:16:58.536 "bdev_name": "nvme0n1" 00:16:58.536 }, 00:16:58.536 { 00:16:58.536 "nbd_device": "/dev/nbd1", 00:16:58.536 "bdev_name": "nvme0n2" 00:16:58.536 }, 00:16:58.536 { 00:16:58.536 "nbd_device": "/dev/nbd10", 00:16:58.536 "bdev_name": "nvme0n3" 00:16:58.536 }, 00:16:58.536 { 00:16:58.536 "nbd_device": "/dev/nbd11", 00:16:58.536 "bdev_name": "nvme1n1" 00:16:58.536 }, 00:16:58.536 { 00:16:58.536 "nbd_device": "/dev/nbd12", 00:16:58.536 "bdev_name": "nvme2n1" 00:16:58.536 }, 00:16:58.536 { 00:16:58.536 "nbd_device": "/dev/nbd13", 00:16:58.536 "bdev_name": "nvme3n1" 00:16:58.536 } 00:16:58.536 ]' 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:58.536 /dev/nbd1 00:16:58.536 /dev/nbd10 00:16:58.536 /dev/nbd11 00:16:58.536 /dev/nbd12 00:16:58.536 /dev/nbd13' 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:58.536 /dev/nbd1 00:16:58.536 /dev/nbd10 00:16:58.536 /dev/nbd11 00:16:58.536 /dev/nbd12 00:16:58.536 /dev/nbd13' 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:58.536 256+0 records in 00:16:58.536 256+0 records out 00:16:58.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00676446 s, 155 MB/s 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:58.536 04:33:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:58.794 256+0 records in 00:16:58.794 256+0 records out 00:16:58.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.205004 s, 5.1 MB/s 00:16:58.794 04:33:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:58.794 04:33:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:59.052 256+0 records in 00:16:59.052 256+0 records out 00:16:59.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243669 s, 
4.3 MB/s 00:16:59.052 04:33:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:59.052 04:33:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:59.052 256+0 records in 00:16:59.052 256+0 records out 00:16:59.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.217196 s, 4.8 MB/s 00:16:59.052 04:33:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:59.052 04:33:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:59.310 256+0 records in 00:16:59.310 256+0 records out 00:16:59.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236161 s, 4.4 MB/s 00:16:59.310 04:33:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:59.310 04:33:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:59.570 256+0 records in 00:16:59.570 256+0 records out 00:16:59.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245747 s, 4.3 MB/s 00:16:59.570 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:59.570 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:59.831 256+0 records in 00:16:59.831 256+0 records out 00:16:59.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.286567 s, 3.7 MB/s 00:16:59.831 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:59.831 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:59.831 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:59.831 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:59.831 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:59.831 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:59.831 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:59.831 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:59.831 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:17:00.093 
04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:00.093 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:00.356 04:33:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:17:00.616 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:00.616 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:00.616 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:00.616 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:00.616 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:00.616 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:00.616 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:00.616 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:00.616 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:00.616 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:00.875 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:00.876 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:00.876 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:00.876 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:00.876 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:00.876 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:00.876 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:00.876 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:00.876 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:00.876 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:01.136 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:01.136 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:01.136 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:01.136 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:01.136 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:01.136 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:01.136 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:01.136 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:01.136 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:01.136 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:01.396 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:01.396 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:01.396 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:01.396 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:01.396 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:01.396 
04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:01.396 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:01.396 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:01.396 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:01.396 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:01.396 04:33:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:01.657 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:01.658 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:01.658 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:01.658 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:01.658 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:01.658 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:01.658 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:01.658 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:01.658 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:01.658 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:01.658 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:01.658 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:01.658 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:01.658 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:01.658 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:01.658 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:01.918 malloc_lvol_verify 00:17:01.918 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:02.177 3a52aba2-749a-4148-a2e0-7253d7bcd3de 00:17:02.177 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:02.436 c5ef7bda-403c-4eb4-a102-b1886112903a 00:17:02.437 04:33:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:02.695 /dev/nbd0 00:17:02.695 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:02.695 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:02.695 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:02.695 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:02.695 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:02.695 mke2fs 1.47.0 (5-Feb-2023) 00:17:02.695 Discarding device blocks: 0/4096 
done 00:17:02.695 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:02.695 00:17:02.695 Allocating group tables: 0/1 done 00:17:02.695 Writing inode tables: 0/1 done 00:17:02.695 Creating journal (1024 blocks): done 00:17:02.695 Writing superblocks and filesystem accounting information: 0/1 done 00:17:02.695 00:17:02.695 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:02.695 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:02.695 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:02.695 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:02.695 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:02.695 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:02.695 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72358 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72358 ']' 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72358 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72358 00:17:02.954 killing process with pid 72358 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72358' 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72358 00:17:02.954 04:33:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72358 00:17:03.894 04:34:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:03.894 00:17:03.894 real 0m11.425s 00:17:03.894 user 0m15.521s 00:17:03.894 sys 0m3.750s 00:17:03.894 04:34:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.894 04:34:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:03.894 ************************************ 00:17:03.894 END TEST bdev_nbd 00:17:03.894 ************************************ 
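The bdev_nbd test that ends above leans on one handshake for every exported device: nbd_start_disk returns a /dev/nbdN path, and waitfornbd then proves the kernel actually registered it before any data verification runs. Reconstructed from the xtrace (the autotest_common.sh steps logged as @872 through @893), the helper amounts to the sketch below; this is a paraphrase under stated assumptions, not the verbatim SPDK source, and the retry delay is assumed since no retry was ever needed in this run.

    # Sketch of waitfornbd as traced above. $testdir/nbdtest stands in for the
    # logged /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest scratch file.
    waitfornbd() {
        local nbd_name=$1 i size
        # Loop 1 (logged @875/@876): poll until the kernel lists the device
        # in /proc/partitions, giving up after 20 attempts.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # delay assumed; not visible in this trace
        done
        # Loop 2 (logged @888/@889): retry a single direct-I/O 4 KiB read
        # until it succeeds.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$testdir/nbdtest" bs=4096 count=1 iflag=direct && break
            sleep 0.1   # delay assumed; not visible in this trace
        done
        # Logged @890-@893: the read must have produced a non-empty file
        # (traced as: '[' 4096 '!=' 0 ']' followed by return 0).
        size=$(stat -c %s "$testdir/nbdtest")
        rm -f "$testdir/nbdtest"
        [ "$size" != 0 ]
    }

Every probe in this run hit on the first pass (grep matched immediately and each dd moved exactly 4096 bytes), so both loops break without retrying; the per-device MB/s figures above are just the latency of a single 4 KiB direct read.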
00:17:03.894 04:34:00 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:17:03.894 04:34:00 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:17:03.894 04:34:00 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:17:03.894 04:34:00 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:17:03.894 04:34:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:03.894 04:34:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.894 04:34:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:03.894 ************************************ 00:17:03.894 START TEST bdev_fio 00:17:03.894 ************************************ 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:03.894 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:03.894 ************************************ 00:17:03.894 START TEST bdev_fio_rw_verify 00:17:03.894 ************************************ 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:03.894 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:17:03.895 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:03.895 04:34:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:04.152 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:04.152 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:04.152 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:04.152 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:04.152 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:04.152 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:04.152 fio-3.35 00:17:04.152 Starting 6 threads 00:17:16.380 00:17:16.380 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72772: Wed Nov 27 04:34:11 2024 00:17:16.380 read: IOPS=41.5k, BW=162MiB/s (170MB/s)(1621MiB/10001msec) 00:17:16.380 slat (usec): min=2, max=934, avg= 4.79, stdev= 3.93 00:17:16.380 clat (usec): min=66, max=4685, avg=395.75, stdev=223.05 00:17:16.380 lat (usec): min=72, max=4690, avg=400.54, stdev=223.42 
00:17:16.381 clat percentiles (usec): 00:17:16.381 | 50.000th=[ 355], 99.000th=[ 1074], 99.900th=[ 2180], 99.990th=[ 3851], 00:17:16.381 | 99.999th=[ 4686] 00:17:16.381 write: IOPS=41.8k, BW=163MiB/s (171MB/s)(1634MiB/10001msec); 0 zone resets 00:17:16.381 slat (usec): min=10, max=3986, avg=24.44, stdev=37.95 00:17:16.381 clat (usec): min=51, max=6049, avg=519.04, stdev=254.87 00:17:16.381 lat (usec): min=74, max=6083, avg=543.48, stdev=259.84 00:17:16.381 clat percentiles (usec): 00:17:16.381 | 50.000th=[ 478], 99.000th=[ 1303], 99.900th=[ 2278], 99.990th=[ 3752], 00:17:16.381 | 99.999th=[ 5932] 00:17:16.381 bw ( KiB/s): min=133206, max=194922, per=99.37%, avg=166301.63, stdev=2661.42, samples=114 00:17:16.381 iops : min=33301, max=48730, avg=41574.79, stdev=665.33, samples=114 00:17:16.381 lat (usec) : 100=0.10%, 250=18.21%, 500=46.44%, 750=24.62%, 1000=7.80% 00:17:16.381 lat (msec) : 2=2.69%, 4=0.12%, 10=0.01% 00:17:16.381 cpu : usr=48.97%, sys=33.06%, ctx=10172, majf=0, minf=33282 00:17:16.381 IO depths : 1=11.8%, 2=24.0%, 4=50.9%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.381 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.381 issued rwts: total=415077,418412,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.381 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:16.381 00:17:16.381 Run status group 0 (all jobs): 00:17:16.381 READ: bw=162MiB/s (170MB/s), 162MiB/s-162MiB/s (170MB/s-170MB/s), io=1621MiB (1700MB), run=10001-10001msec 00:17:16.381 WRITE: bw=163MiB/s (171MB/s), 163MiB/s-163MiB/s (171MB/s-171MB/s), io=1634MiB (1714MB), run=10001-10001msec 00:17:16.381 ----------------------------------------------------- 00:17:16.381 Suppressions used: 00:17:16.381 count bytes template 00:17:16.381 6 48 /usr/src/fio/parse.c 00:17:16.381 3023 290208 /usr/src/fio/iolog.c 00:17:16.381 1 8 libtcmalloc_minimal.so 00:17:16.381 1 904 libcrypto.so 00:17:16.381 ----------------------------------------------------- 00:17:16.381 00:17:16.381 00:17:16.381 real 0m11.889s 00:17:16.381 user 0m30.849s 00:17:16.381 sys 0m20.126s 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.381 ************************************ 00:17:16.381 END TEST bdev_fio_rw_verify 00:17:16.381 ************************************ 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "dee578f0-1d33-486d-b76f-7a13bfe8ffcd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dee578f0-1d33-486d-b76f-7a13bfe8ffcd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "e1b1a04a-aa62-4d2f-8b0d-7fa3927b77a4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e1b1a04a-aa62-4d2f-8b0d-7fa3927b77a4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "225513f7-cdc9-45d5-b7d1-28da4175ab1e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "225513f7-cdc9-45d5-b7d1-28da4175ab1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "5df7184e-d38b-4ee9-8c1f-26f7d0a2dee2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5df7184e-d38b-4ee9-8c1f-26f7d0a2dee2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "596410a3-b99a-4779-9284-127b2ad665ee"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "596410a3-b99a-4779-9284-127b2ad665ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cfc303e2-941c-4575-a3e5-10647388c102"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cfc303e2-941c-4575-a3e5-10647388c102",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:16.381 /home/vagrant/spdk_repo/spdk 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:16.381 00:17:16.381 real 0m12.028s 00:17:16.381 user 
0m30.920s 00:17:16.381 sys 0m20.196s 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.381 ************************************ 00:17:16.381 END TEST bdev_fio 00:17:16.381 04:34:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:16.381 ************************************ 00:17:16.381 04:34:12 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:16.381 04:34:12 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:16.381 04:34:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:16.381 04:34:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.381 04:34:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:16.381 ************************************ 00:17:16.381 START TEST bdev_verify 00:17:16.381 ************************************ 00:17:16.381 04:34:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:16.381 [2024-11-27 04:34:12.444537] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:16.381 [2024-11-27 04:34:12.444668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72941 ] 00:17:16.381 [2024-11-27 04:34:12.594957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:16.382 [2024-11-27 04:34:12.683101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.382 [2024-11-27 04:34:12.683357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.640 Running I/O for 5 seconds... 
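
The two "Reactor started" lines above map directly onto the -m 0x3 core mask passed to bdevperf: 0x3 is binary 11, i.e. cores 0 and 1, and -C lets every selected core submit I/O to each bdev. A minimal sketch (bash; not part of the test suite) of expanding such a mask:

    # Expand an SPDK-style hex core mask into the CPUs it selects.
    mask=0x3
    for cpu in $(seq 0 31); do
      (( (mask >> cpu) & 1 )) && echo "core $cpu"
    done
    # 0x3 -> core 0, core 1, matching the two reactors in this log
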
00:17:18.983 28672.00 IOPS, 112.00 MiB/s [2024-11-27T04:34:16.500Z] 27184.00 IOPS, 106.19 MiB/s [2024-11-27T04:34:17.432Z] 26197.33 IOPS, 102.33 MiB/s [2024-11-27T04:34:18.400Z] 25760.00 IOPS, 100.62 MiB/s [2024-11-27T04:34:18.400Z] 25638.40 IOPS, 100.15 MiB/s 00:17:21.813 Latency(us) 00:17:21.813 [2024-11-27T04:34:18.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.813 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:21.813 Verification LBA range: start 0x0 length 0x80000 00:17:21.813 nvme0n1 : 5.06 1870.77 7.31 0.00 0.00 68311.60 6906.49 60091.47 00:17:21.813 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:21.813 Verification LBA range: start 0x80000 length 0x80000 00:17:21.814 nvme0n1 : 5.02 1835.35 7.17 0.00 0.00 69604.30 15627.82 60091.47 00:17:21.814 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:21.814 Verification LBA range: start 0x0 length 0x80000 00:17:21.814 nvme0n2 : 5.02 1860.23 7.27 0.00 0.00 68586.30 13510.50 61704.66 00:17:21.814 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:21.814 Verification LBA range: start 0x80000 length 0x80000 00:17:21.814 nvme0n2 : 5.04 1828.69 7.14 0.00 0.00 69720.27 15022.87 66947.54 00:17:21.814 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:21.814 Verification LBA range: start 0x0 length 0x80000 00:17:21.814 nvme0n3 : 5.07 1868.21 7.30 0.00 0.00 68188.35 10233.70 70577.23 00:17:21.814 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:21.814 Verification LBA range: start 0x80000 length 0x80000 00:17:21.814 nvme0n3 : 5.04 1828.09 7.14 0.00 0.00 69598.88 11645.24 70173.93 00:17:21.814 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:21.814 Verification LBA range: start 0x0 length 0xa0000 00:17:21.814 nvme1n1 : 5.06 1870.14 7.31 0.00 0.00 68008.02 7914.73 62511.26 00:17:21.814 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:21.814 Verification LBA range: start 0xa0000 length 0xa0000 00:17:21.814 nvme1n1 : 5.07 1844.11 7.20 0.00 0.00 68848.51 9275.86 73400.32 00:17:21.814 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:21.814 Verification LBA range: start 0x0 length 0x20000 00:17:21.814 nvme2n1 : 5.05 1851.51 7.23 0.00 0.00 68576.72 6805.66 60494.77 00:17:21.814 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:21.814 Verification LBA range: start 0x20000 length 0x20000 00:17:21.814 nvme2n1 : 5.07 1843.54 7.20 0.00 0.00 68734.95 11695.66 64931.05 00:17:21.814 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:21.814 Verification LBA range: start 0x0 length 0xbd0bd 00:17:21.814 nvme3n1 : 5.07 3394.24 13.26 0.00 0.00 37293.21 2760.07 68157.44 00:17:21.814 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:21.814 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:21.814 nvme3n1 : 5.08 3503.83 13.69 0.00 0.00 36074.37 2533.22 56058.49 00:17:21.814 [2024-11-27T04:34:18.401Z] =================================================================================================================== 00:17:21.814 [2024-11-27T04:34:18.401Z] Total : 25398.72 99.21 0.00 0.00 60052.66 2533.22 73400.32 00:17:22.381 00:17:22.381 real 0m6.581s 00:17:22.381 user 0m10.384s 00:17:22.381 sys 0m1.803s 00:17:22.381 04:34:18 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.381 ************************************ 00:17:22.381 04:34:18 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:22.381 END TEST bdev_verify 00:17:22.381 ************************************ 00:17:22.638 04:34:18 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:22.638 04:34:18 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:22.638 04:34:18 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.638 04:34:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:22.638 ************************************ 00:17:22.638 START TEST bdev_verify_big_io 00:17:22.638 ************************************ 00:17:22.638 04:34:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:22.638 [2024-11-27 04:34:19.065911] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:22.638 [2024-11-27 04:34:19.066037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73038 ] 00:17:22.895 [2024-11-27 04:34:19.225557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:22.895 [2024-11-27 04:34:19.329288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.895 [2024-11-27 04:34:19.329533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.460 Running I/O for 5 seconds... 
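
This pass reuses the same bdevperf harness but raises the I/O size from 4096 to 65536 bytes (-o 65536), so the MiB/s column rather than raw IOPS is the number to watch. The conversion is simply IOPS times block size; a quick sanity check against the first progress line below (a bash sketch, not part of the test):

    # MiB/s = IOPS * block_size / 2^20
    iops=580 bs=65536
    echo "scale=2; $iops * $bs / 1048576" | bc   # -> 36.25, as reported below
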
00:17:29.288 580.00 IOPS, 36.25 MiB/s [2024-11-27T04:34:25.875Z] 2412.00 IOPS, 150.75 MiB/s [2024-11-27T04:34:26.133Z] 2920.00 IOPS, 182.50 MiB/s 00:17:29.546 Latency(us) 00:17:29.546 [2024-11-27T04:34:26.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.546 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:29.546 Verification LBA range: start 0x0 length 0x8000 00:17:29.546 nvme0n1 : 5.87 128.13 8.01 0.00 0.00 974951.51 34683.67 1129235.69 00:17:29.546 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:29.546 Verification LBA range: start 0x8000 length 0x8000 00:17:29.546 nvme0n1 : 5.78 114.86 7.18 0.00 0.00 1073141.98 11040.30 2051982.57 00:17:29.546 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:29.546 Verification LBA range: start 0x0 length 0x8000 00:17:29.546 nvme0n2 : 5.95 129.18 8.07 0.00 0.00 932324.69 125022.52 1090519.04 00:17:29.546 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:29.546 Verification LBA range: start 0x8000 length 0x8000 00:17:29.546 nvme0n2 : 5.88 116.98 7.31 0.00 0.00 1006690.41 103244.41 1025991.29 00:17:29.546 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:29.546 Verification LBA range: start 0x0 length 0x8000 00:17:29.546 nvme0n3 : 5.87 98.10 6.13 0.00 0.00 1174262.29 116956.55 1755154.90 00:17:29.546 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:29.546 Verification LBA range: start 0x8000 length 0x8000 00:17:29.546 nvme0n3 : 5.78 132.80 8.30 0.00 0.00 861684.18 95581.74 980821.86 00:17:29.546 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:29.546 Verification LBA range: start 0x0 length 0xa000 00:17:29.546 nvme1n1 : 6.03 116.84 7.30 0.00 0.00 966303.40 99614.72 1309913.40 00:17:29.546 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:29.546 Verification LBA range: start 0xa000 length 0xa000 00:17:29.546 nvme1n1 : 6.03 137.89 8.62 0.00 0.00 811706.99 111310.38 922746.88 00:17:29.546 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:29.546 Verification LBA range: start 0x0 length 0x2000 00:17:29.546 nvme2n1 : 5.95 115.63 7.23 0.00 0.00 938320.75 6276.33 1948738.17 00:17:29.546 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:29.546 Verification LBA range: start 0x2000 length 0x2000 00:17:29.546 nvme2n1 : 6.04 116.51 7.28 0.00 0.00 935796.22 61704.66 2413337.99 00:17:29.546 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:29.546 Verification LBA range: start 0x0 length 0xbd0b 00:17:29.546 nvme3n1 : 6.07 173.91 10.87 0.00 0.00 605476.09 5595.77 2000360.37 00:17:29.546 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:29.546 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:29.546 nvme3n1 : 6.05 193.21 12.08 0.00 0.00 545098.20 3453.24 858219.13 00:17:29.546 [2024-11-27T04:34:26.133Z] =================================================================================================================== 00:17:29.546 [2024-11-27T04:34:26.133Z] Total : 1574.03 98.38 0.00 0.00 867692.91 3453.24 2413337.99 00:17:30.479 00:17:30.479 real 0m7.805s 00:17:30.479 user 0m14.408s 00:17:30.479 sys 0m0.426s 00:17:30.479 ************************************ 00:17:30.479 END TEST bdev_verify_big_io 00:17:30.479 ************************************ 00:17:30.479 
04:34:26 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.479 04:34:26 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.479 04:34:26 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:30.479 04:34:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:30.479 04:34:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.479 04:34:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:30.479 ************************************ 00:17:30.479 START TEST bdev_write_zeroes 00:17:30.479 ************************************ 00:17:30.479 04:34:26 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:30.479 [2024-11-27 04:34:26.906566] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:30.479 [2024-11-27 04:34:26.907235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73151 ] 00:17:30.738 [2024-11-27 04:34:27.068043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.738 [2024-11-27 04:34:27.170406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.996 Running I/O for 1 seconds... 00:17:32.369 77984.00 IOPS, 304.62 MiB/s 00:17:32.369 Latency(us) 00:17:32.369 [2024-11-27T04:34:28.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.369 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:32.369 nvme0n1 : 1.02 11184.01 43.69 0.00 0.00 11434.68 5948.65 22988.01 00:17:32.369 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:32.369 nvme0n2 : 1.02 11171.30 43.64 0.00 0.00 11439.70 6125.10 23290.49 00:17:32.369 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:32.369 nvme0n3 : 1.02 11158.21 43.59 0.00 0.00 11444.60 6125.10 23693.78 00:17:32.369 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:32.369 nvme1n1 : 1.02 11145.80 43.54 0.00 0.00 11449.09 6125.10 24097.08 00:17:32.369 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:32.369 nvme2n1 : 1.02 11133.19 43.49 0.00 0.00 11453.91 6099.89 24399.56 00:17:32.369 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:32.369 nvme3n1 : 1.03 21342.18 83.37 0.00 0.00 5968.39 2495.41 15930.29 00:17:32.369 [2024-11-27T04:34:28.956Z] =================================================================================================================== 00:17:32.369 [2024-11-27T04:34:28.956Z] Total : 77134.69 301.31 0.00 0.00 9922.67 2495.41 24399.56 00:17:32.934 00:17:32.934 real 0m2.450s 00:17:32.934 user 0m1.691s 00:17:32.934 sys 0m0.587s 00:17:32.934 04:34:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.934 ************************************ 00:17:32.935 END TEST bdev_write_zeroes 00:17:32.935 
************************************ 00:17:32.935 04:34:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:32.935 04:34:29 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:32.935 04:34:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:32.935 04:34:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.935 04:34:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:32.935 ************************************ 00:17:32.935 START TEST bdev_json_nonenclosed 00:17:32.935 ************************************ 00:17:32.935 04:34:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:32.935 [2024-11-27 04:34:29.392682] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:32.935 [2024-11-27 04:34:29.393171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73205 ] 00:17:33.235 [2024-11-27 04:34:29.554928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.235 [2024-11-27 04:34:29.655069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.235 [2024-11-27 04:34:29.655157] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:33.235 [2024-11-27 04:34:29.655174] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:33.235 [2024-11-27 04:34:29.655183] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:33.492 00:17:33.492 real 0m0.505s 00:17:33.492 user 0m0.311s 00:17:33.492 sys 0m0.089s 00:17:33.492 04:34:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.492 04:34:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:33.492 ************************************ 00:17:33.492 END TEST bdev_json_nonenclosed 00:17:33.492 ************************************ 00:17:33.492 04:34:29 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:33.492 04:34:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:33.492 04:34:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.492 04:34:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:33.492 ************************************ 00:17:33.492 START TEST bdev_json_nonarray 00:17:33.492 ************************************ 00:17:33.492 04:34:29 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:33.492 [2024-11-27 04:34:29.968346] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
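
bdev_json_nonenclosed above and bdev_json_nonarray starting here both hand bdevperf a deliberately malformed --json file and pass only if the loader rejects it with the *ERROR* lines seen in this log. As a purely hypothetical illustration of the two failure shapes (not the actual contents of the repo's nonenclosed.json or nonarray.json):

    # "not enclosed in {}": the top level must be a JSON object
    cat > /tmp/nonenclosed.json <<'EOF'
    "subsystems": []
    EOF

    # "'subsystems' should be an array": right key, wrong value type
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev" } }
    EOF
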
00:17:33.492 [2024-11-27 04:34:29.968534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73225 ] 00:17:33.750 [2024-11-27 04:34:30.143752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.750 [2024-11-27 04:34:30.248320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.750 [2024-11-27 04:34:30.248408] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:17:33.750 [2024-11-27 04:34:30.248423] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:33.750 [2024-11-27 04:34:30.248430] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:34.008 00:17:34.008 real 0m0.545s 00:17:34.008 user 0m0.323s 00:17:34.008 sys 0m0.116s 00:17:34.008 04:34:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.008 04:34:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:34.008 ************************************ 00:17:34.008 END TEST bdev_json_nonarray 00:17:34.008 ************************************ 00:17:34.008 04:34:30 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:17:34.008 04:34:30 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:17:34.008 04:34:30 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:17:34.008 04:34:30 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:17:34.008 04:34:30 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:17:34.008 04:34:30 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:34.008 04:34:30 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:34.008 04:34:30 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:17:34.008 04:34:30 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:17:34.008 04:34:30 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:17:34.008 04:34:30 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:17:34.008 04:34:30 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:34.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:13.312 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:13.312 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:18.574 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:18.574 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:18.574 00:18:18.574 real 1m33.399s 00:18:18.574 user 1m25.934s 00:18:18.574 sys 1m31.736s 00:18:18.574 04:35:14 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.574 04:35:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:18.574 ************************************ 00:18:18.574 END TEST blockdev_xnvme 00:18:18.574 ************************************ 00:18:18.574 04:35:14 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:18.574 04:35:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:18.574 04:35:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.574 04:35:14 -- 
common/autotest_common.sh@10 -- # set +x 00:18:18.574 ************************************ 00:18:18.574 START TEST ublk 00:18:18.574 ************************************ 00:18:18.574 04:35:14 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:18.574 * Looking for test storage... 00:18:18.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:18.574 04:35:14 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:18.574 04:35:14 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:18:18.574 04:35:14 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:18.574 04:35:14 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:18.574 04:35:14 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.574 04:35:14 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.574 04:35:14 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.574 04:35:14 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.574 04:35:14 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.574 04:35:14 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.574 04:35:14 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.574 04:35:14 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.574 04:35:14 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.574 04:35:14 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.574 04:35:14 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.574 04:35:14 ublk -- scripts/common.sh@344 -- # case "$op" in 00:18:18.574 04:35:14 ublk -- scripts/common.sh@345 -- # : 1 00:18:18.574 04:35:14 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.574 04:35:14 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.574 04:35:14 ublk -- scripts/common.sh@365 -- # decimal 1 00:18:18.574 04:35:14 ublk -- scripts/common.sh@353 -- # local d=1 00:18:18.574 04:35:14 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.574 04:35:14 ublk -- scripts/common.sh@355 -- # echo 1 00:18:18.574 04:35:14 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.574 04:35:14 ublk -- scripts/common.sh@366 -- # decimal 2 00:18:18.574 04:35:14 ublk -- scripts/common.sh@353 -- # local d=2 00:18:18.574 04:35:14 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.574 04:35:14 ublk -- scripts/common.sh@355 -- # echo 2 00:18:18.574 04:35:14 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.574 04:35:14 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.574 04:35:14 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.574 04:35:14 ublk -- scripts/common.sh@368 -- # return 0 00:18:18.574 04:35:14 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.574 04:35:14 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:18.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.574 --rc genhtml_branch_coverage=1 00:18:18.574 --rc genhtml_function_coverage=1 00:18:18.574 --rc genhtml_legend=1 00:18:18.574 --rc geninfo_all_blocks=1 00:18:18.574 --rc geninfo_unexecuted_blocks=1 00:18:18.574 00:18:18.574 ' 00:18:18.574 04:35:14 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:18.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.574 --rc genhtml_branch_coverage=1 00:18:18.574 --rc genhtml_function_coverage=1 00:18:18.574 --rc genhtml_legend=1 00:18:18.574 --rc geninfo_all_blocks=1 00:18:18.574 --rc geninfo_unexecuted_blocks=1 00:18:18.574 00:18:18.574 ' 00:18:18.574 04:35:14 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:18.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.574 --rc genhtml_branch_coverage=1 00:18:18.575 --rc genhtml_function_coverage=1 00:18:18.575 --rc genhtml_legend=1 00:18:18.575 --rc geninfo_all_blocks=1 00:18:18.575 --rc geninfo_unexecuted_blocks=1 00:18:18.575 00:18:18.575 ' 00:18:18.575 04:35:14 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:18.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.575 --rc genhtml_branch_coverage=1 00:18:18.575 --rc genhtml_function_coverage=1 00:18:18.575 --rc genhtml_legend=1 00:18:18.575 --rc geninfo_all_blocks=1 00:18:18.575 --rc geninfo_unexecuted_blocks=1 00:18:18.575 00:18:18.575 ' 00:18:18.575 04:35:14 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:18.575 04:35:14 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:18.575 04:35:14 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:18.575 04:35:14 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:18.575 04:35:14 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:18.575 04:35:14 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:18.575 04:35:14 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:18.575 04:35:14 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:18.575 04:35:14 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:18.575 04:35:14 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:18:18.575 04:35:14 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:18:18.575 04:35:14 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:18:18.575 04:35:14 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:18:18.575 04:35:14 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:18:18.575 04:35:14 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:18:18.575 04:35:14 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:18:18.575 04:35:14 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:18:18.575 04:35:14 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:18:18.575 04:35:14 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:18:18.575 04:35:14 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:18:18.575 04:35:14 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:18.575 04:35:14 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.575 04:35:14 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.575 ************************************ 00:18:18.575 START TEST test_save_ublk_config 00:18:18.575 ************************************ 00:18:18.575 04:35:14 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:18:18.575 04:35:14 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:18:18.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.575 04:35:14 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73547 00:18:18.575 04:35:14 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:18:18.575 04:35:14 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:18:18.575 04:35:14 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73547 00:18:18.575 04:35:14 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73547 ']' 00:18:18.575 04:35:14 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.575 04:35:14 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.575 04:35:14 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.575 04:35:14 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.575 04:35:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:18.575 [2024-11-27 04:35:14.785737] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
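
test_save_config drives the freshly started target over /var/tmp/spdk.sock: it creates the ublk target, exports malloc0 as /dev/ublkb0, and snapshots the running configuration with save_config. Shown below are the raw JSON-RPC equivalents, with method names and params taken verbatim from the save_config dump that follows; piping them through nc -U is an assumption about available tooling (the test itself goes through rpc_cmd):

    sock=/var/tmp/spdk.sock
    printf '%s\n' \
      '{"jsonrpc":"2.0","id":1,"method":"ublk_create_target","params":{"cpumask":"1"}}' \
      '{"jsonrpc":"2.0","id":2,"method":"ublk_start_disk","params":{"bdev_name":"malloc0","ublk_id":0,"num_queues":1,"queue_depth":128}}' \
      '{"jsonrpc":"2.0","id":3,"method":"save_config"}' \
      | nc -U "$sock"
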
00:18:18.575 [2024-11-27 04:35:14.785856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73547 ] 00:18:18.575 [2024-11-27 04:35:14.947371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.575 [2024-11-27 04:35:15.049708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.151 04:35:15 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.151 04:35:15 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:19.151 04:35:15 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:18:19.151 04:35:15 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:18:19.151 04:35:15 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.151 04:35:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:19.151 [2024-11-27 04:35:15.677746] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:19.151 [2024-11-27 04:35:15.678569] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:19.151 malloc0 00:18:19.409 [2024-11-27 04:35:15.741892] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:19.409 [2024-11-27 04:35:15.741985] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:19.409 [2024-11-27 04:35:15.741994] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:19.409 [2024-11-27 04:35:15.742002] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:19.409 [2024-11-27 04:35:15.749893] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:19.409 [2024-11-27 04:35:15.749923] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:19.409 [2024-11-27 04:35:15.757754] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:19.409 [2024-11-27 04:35:15.757873] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:19.409 [2024-11-27 04:35:15.774751] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:19.409 0 00:18:19.409 04:35:15 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.409 04:35:15 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:18:19.409 04:35:15 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.409 04:35:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:19.666 04:35:16 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.666 04:35:16 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:18:19.666 "subsystems": [ 00:18:19.666 { 00:18:19.667 "subsystem": "fsdev", 00:18:19.667 "config": [ 00:18:19.667 { 00:18:19.667 "method": "fsdev_set_opts", 00:18:19.667 "params": { 00:18:19.667 "fsdev_io_pool_size": 65535, 00:18:19.667 "fsdev_io_cache_size": 256 00:18:19.667 } 00:18:19.667 } 00:18:19.667 ] 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "subsystem": "keyring", 00:18:19.667 "config": [] 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "subsystem": "iobuf", 00:18:19.667 "config": [ 00:18:19.667 { 
00:18:19.667 "method": "iobuf_set_options", 00:18:19.667 "params": { 00:18:19.667 "small_pool_count": 8192, 00:18:19.667 "large_pool_count": 1024, 00:18:19.667 "small_bufsize": 8192, 00:18:19.667 "large_bufsize": 135168, 00:18:19.667 "enable_numa": false 00:18:19.667 } 00:18:19.667 } 00:18:19.667 ] 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "subsystem": "sock", 00:18:19.667 "config": [ 00:18:19.667 { 00:18:19.667 "method": "sock_set_default_impl", 00:18:19.667 "params": { 00:18:19.667 "impl_name": "posix" 00:18:19.667 } 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "method": "sock_impl_set_options", 00:18:19.667 "params": { 00:18:19.667 "impl_name": "ssl", 00:18:19.667 "recv_buf_size": 4096, 00:18:19.667 "send_buf_size": 4096, 00:18:19.667 "enable_recv_pipe": true, 00:18:19.667 "enable_quickack": false, 00:18:19.667 "enable_placement_id": 0, 00:18:19.667 "enable_zerocopy_send_server": true, 00:18:19.667 "enable_zerocopy_send_client": false, 00:18:19.667 "zerocopy_threshold": 0, 00:18:19.667 "tls_version": 0, 00:18:19.667 "enable_ktls": false 00:18:19.667 } 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "method": "sock_impl_set_options", 00:18:19.667 "params": { 00:18:19.667 "impl_name": "posix", 00:18:19.667 "recv_buf_size": 2097152, 00:18:19.667 "send_buf_size": 2097152, 00:18:19.667 "enable_recv_pipe": true, 00:18:19.667 "enable_quickack": false, 00:18:19.667 "enable_placement_id": 0, 00:18:19.667 "enable_zerocopy_send_server": true, 00:18:19.667 "enable_zerocopy_send_client": false, 00:18:19.667 "zerocopy_threshold": 0, 00:18:19.667 "tls_version": 0, 00:18:19.667 "enable_ktls": false 00:18:19.667 } 00:18:19.667 } 00:18:19.667 ] 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "subsystem": "vmd", 00:18:19.667 "config": [] 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "subsystem": "accel", 00:18:19.667 "config": [ 00:18:19.667 { 00:18:19.667 "method": "accel_set_options", 00:18:19.667 "params": { 00:18:19.667 "small_cache_size": 128, 00:18:19.667 "large_cache_size": 16, 00:18:19.667 "task_count": 2048, 00:18:19.667 "sequence_count": 2048, 00:18:19.667 "buf_count": 2048 00:18:19.667 } 00:18:19.667 } 00:18:19.667 ] 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "subsystem": "bdev", 00:18:19.667 "config": [ 00:18:19.667 { 00:18:19.667 "method": "bdev_set_options", 00:18:19.667 "params": { 00:18:19.667 "bdev_io_pool_size": 65535, 00:18:19.667 "bdev_io_cache_size": 256, 00:18:19.667 "bdev_auto_examine": true, 00:18:19.667 "iobuf_small_cache_size": 128, 00:18:19.667 "iobuf_large_cache_size": 16 00:18:19.667 } 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "method": "bdev_raid_set_options", 00:18:19.667 "params": { 00:18:19.667 "process_window_size_kb": 1024, 00:18:19.667 "process_max_bandwidth_mb_sec": 0 00:18:19.667 } 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "method": "bdev_iscsi_set_options", 00:18:19.667 "params": { 00:18:19.667 "timeout_sec": 30 00:18:19.667 } 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "method": "bdev_nvme_set_options", 00:18:19.667 "params": { 00:18:19.667 "action_on_timeout": "none", 00:18:19.667 "timeout_us": 0, 00:18:19.667 "timeout_admin_us": 0, 00:18:19.667 "keep_alive_timeout_ms": 10000, 00:18:19.667 "arbitration_burst": 0, 00:18:19.667 "low_priority_weight": 0, 00:18:19.667 "medium_priority_weight": 0, 00:18:19.667 "high_priority_weight": 0, 00:18:19.667 "nvme_adminq_poll_period_us": 10000, 00:18:19.667 "nvme_ioq_poll_period_us": 0, 00:18:19.667 "io_queue_requests": 0, 00:18:19.667 "delay_cmd_submit": true, 00:18:19.667 "transport_retry_count": 4, 00:18:19.667 
"bdev_retry_count": 3, 00:18:19.667 "transport_ack_timeout": 0, 00:18:19.667 "ctrlr_loss_timeout_sec": 0, 00:18:19.667 "reconnect_delay_sec": 0, 00:18:19.667 "fast_io_fail_timeout_sec": 0, 00:18:19.667 "disable_auto_failback": false, 00:18:19.667 "generate_uuids": false, 00:18:19.667 "transport_tos": 0, 00:18:19.667 "nvme_error_stat": false, 00:18:19.667 "rdma_srq_size": 0, 00:18:19.667 "io_path_stat": false, 00:18:19.667 "allow_accel_sequence": false, 00:18:19.667 "rdma_max_cq_size": 0, 00:18:19.667 "rdma_cm_event_timeout_ms": 0, 00:18:19.667 "dhchap_digests": [ 00:18:19.667 "sha256", 00:18:19.667 "sha384", 00:18:19.667 "sha512" 00:18:19.667 ], 00:18:19.667 "dhchap_dhgroups": [ 00:18:19.667 "null", 00:18:19.667 "ffdhe2048", 00:18:19.667 "ffdhe3072", 00:18:19.667 "ffdhe4096", 00:18:19.667 "ffdhe6144", 00:18:19.667 "ffdhe8192" 00:18:19.667 ] 00:18:19.667 } 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "method": "bdev_nvme_set_hotplug", 00:18:19.667 "params": { 00:18:19.667 "period_us": 100000, 00:18:19.667 "enable": false 00:18:19.667 } 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "method": "bdev_malloc_create", 00:18:19.667 "params": { 00:18:19.667 "name": "malloc0", 00:18:19.667 "num_blocks": 8192, 00:18:19.667 "block_size": 4096, 00:18:19.667 "physical_block_size": 4096, 00:18:19.667 "uuid": "d20be550-74ac-4c20-bacb-4c5a6cd86c8a", 00:18:19.667 "optimal_io_boundary": 0, 00:18:19.667 "md_size": 0, 00:18:19.667 "dif_type": 0, 00:18:19.667 "dif_is_head_of_md": false, 00:18:19.667 "dif_pi_format": 0 00:18:19.667 } 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "method": "bdev_wait_for_examine" 00:18:19.667 } 00:18:19.667 ] 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "subsystem": "scsi", 00:18:19.667 "config": null 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "subsystem": "scheduler", 00:18:19.667 "config": [ 00:18:19.667 { 00:18:19.667 "method": "framework_set_scheduler", 00:18:19.667 "params": { 00:18:19.667 "name": "static" 00:18:19.667 } 00:18:19.667 } 00:18:19.667 ] 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "subsystem": "vhost_scsi", 00:18:19.667 "config": [] 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "subsystem": "vhost_blk", 00:18:19.667 "config": [] 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "subsystem": "ublk", 00:18:19.667 "config": [ 00:18:19.667 { 00:18:19.667 "method": "ublk_create_target", 00:18:19.667 "params": { 00:18:19.667 "cpumask": "1" 00:18:19.667 } 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "method": "ublk_start_disk", 00:18:19.667 "params": { 00:18:19.667 "bdev_name": "malloc0", 00:18:19.667 "ublk_id": 0, 00:18:19.667 "num_queues": 1, 00:18:19.667 "queue_depth": 128 00:18:19.667 } 00:18:19.667 } 00:18:19.667 ] 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "subsystem": "nbd", 00:18:19.667 "config": [] 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "subsystem": "nvmf", 00:18:19.667 "config": [ 00:18:19.667 { 00:18:19.667 "method": "nvmf_set_config", 00:18:19.667 "params": { 00:18:19.667 "discovery_filter": "match_any", 00:18:19.667 "admin_cmd_passthru": { 00:18:19.667 "identify_ctrlr": false 00:18:19.667 }, 00:18:19.667 "dhchap_digests": [ 00:18:19.667 "sha256", 00:18:19.667 "sha384", 00:18:19.667 "sha512" 00:18:19.667 ], 00:18:19.667 "dhchap_dhgroups": [ 00:18:19.667 "null", 00:18:19.667 "ffdhe2048", 00:18:19.667 "ffdhe3072", 00:18:19.667 "ffdhe4096", 00:18:19.667 "ffdhe6144", 00:18:19.667 "ffdhe8192" 00:18:19.667 ] 00:18:19.667 } 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "method": "nvmf_set_max_subsystems", 00:18:19.667 "params": { 00:18:19.667 "max_subsystems": 1024 
00:18:19.667 } 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "method": "nvmf_set_crdt", 00:18:19.667 "params": { 00:18:19.667 "crdt1": 0, 00:18:19.667 "crdt2": 0, 00:18:19.667 "crdt3": 0 00:18:19.667 } 00:18:19.667 } 00:18:19.667 ] 00:18:19.667 }, 00:18:19.667 { 00:18:19.667 "subsystem": "iscsi", 00:18:19.667 "config": [ 00:18:19.667 { 00:18:19.667 "method": "iscsi_set_options", 00:18:19.667 "params": { 00:18:19.667 "node_base": "iqn.2016-06.io.spdk", 00:18:19.667 "max_sessions": 128, 00:18:19.667 "max_connections_per_session": 2, 00:18:19.667 "max_queue_depth": 64, 00:18:19.667 "default_time2wait": 2, 00:18:19.667 "default_time2retain": 20, 00:18:19.668 "first_burst_length": 8192, 00:18:19.668 "immediate_data": true, 00:18:19.668 "allow_duplicated_isid": false, 00:18:19.668 "error_recovery_level": 0, 00:18:19.668 "nop_timeout": 60, 00:18:19.668 "nop_in_interval": 30, 00:18:19.668 "disable_chap": false, 00:18:19.668 "require_chap": false, 00:18:19.668 "mutual_chap": false, 00:18:19.668 "chap_group": 0, 00:18:19.668 "max_large_datain_per_connection": 64, 00:18:19.668 "max_r2t_per_connection": 4, 00:18:19.668 "pdu_pool_size": 36864, 00:18:19.668 "immediate_data_pool_size": 16384, 00:18:19.668 "data_out_pool_size": 2048 00:18:19.668 } 00:18:19.668 } 00:18:19.668 ] 00:18:19.668 } 00:18:19.668 ] 00:18:19.668 }' 00:18:19.668 04:35:16 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73547 00:18:19.668 04:35:16 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73547 ']' 00:18:19.668 04:35:16 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73547 00:18:19.668 04:35:16 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:19.668 04:35:16 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.668 04:35:16 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73547 00:18:19.668 killing process with pid 73547 00:18:19.668 04:35:16 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:19.668 04:35:16 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:19.668 04:35:16 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73547' 00:18:19.668 04:35:16 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73547 00:18:19.668 04:35:16 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73547 00:18:21.040 [2024-11-27 04:35:17.491882] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:21.040 [2024-11-27 04:35:17.539837] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:21.040 [2024-11-27 04:35:17.539974] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:21.040 [2024-11-27 04:35:17.547777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:21.040 [2024-11-27 04:35:17.547835] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:21.040 [2024-11-27 04:35:17.547848] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:21.040 [2024-11-27 04:35:17.547871] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:21.040 [2024-11-27 04:35:17.548015] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:22.415 04:35:18 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73603 00:18:22.415 04:35:18 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 73603 00:18:22.415 04:35:18 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73603 ']' 00:18:22.415 04:35:18 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.415 04:35:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:18:22.415 04:35:18 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.415 04:35:18 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.415 04:35:18 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.415 04:35:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:22.415 04:35:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:18:22.415 "subsystems": [ 00:18:22.415 { 00:18:22.415 "subsystem": "fsdev", 00:18:22.415 "config": [ 00:18:22.415 { 00:18:22.415 "method": "fsdev_set_opts", 00:18:22.415 "params": { 00:18:22.415 "fsdev_io_pool_size": 65535, 00:18:22.415 "fsdev_io_cache_size": 256 00:18:22.415 } 00:18:22.415 } 00:18:22.415 ] 00:18:22.415 }, 00:18:22.415 { 00:18:22.415 "subsystem": "keyring", 00:18:22.415 "config": [] 00:18:22.415 }, 00:18:22.415 { 00:18:22.415 "subsystem": "iobuf", 00:18:22.415 "config": [ 00:18:22.415 { 00:18:22.415 "method": "iobuf_set_options", 00:18:22.415 "params": { 00:18:22.415 "small_pool_count": 8192, 00:18:22.415 "large_pool_count": 1024, 00:18:22.415 "small_bufsize": 8192, 00:18:22.415 "large_bufsize": 135168, 00:18:22.415 "enable_numa": false 00:18:22.415 } 00:18:22.415 } 00:18:22.415 ] 00:18:22.415 }, 00:18:22.415 { 00:18:22.415 "subsystem": "sock", 00:18:22.415 "config": [ 00:18:22.415 { 00:18:22.415 "method": "sock_set_default_impl", 00:18:22.415 "params": { 00:18:22.415 "impl_name": "posix" 00:18:22.415 } 00:18:22.415 }, 00:18:22.415 { 00:18:22.415 "method": "sock_impl_set_options", 00:18:22.415 "params": { 00:18:22.415 "impl_name": "ssl", 00:18:22.415 "recv_buf_size": 4096, 00:18:22.415 "send_buf_size": 4096, 00:18:22.415 "enable_recv_pipe": true, 00:18:22.415 "enable_quickack": false, 00:18:22.415 "enable_placement_id": 0, 00:18:22.415 "enable_zerocopy_send_server": true, 00:18:22.415 "enable_zerocopy_send_client": false, 00:18:22.415 "zerocopy_threshold": 0, 00:18:22.415 "tls_version": 0, 00:18:22.415 "enable_ktls": false 00:18:22.415 } 00:18:22.415 }, 00:18:22.415 { 00:18:22.415 "method": "sock_impl_set_options", 00:18:22.415 "params": { 00:18:22.415 "impl_name": "posix", 00:18:22.415 "recv_buf_size": 2097152, 00:18:22.415 "send_buf_size": 2097152, 00:18:22.415 "enable_recv_pipe": true, 00:18:22.415 "enable_quickack": false, 00:18:22.415 "enable_placement_id": 0, 00:18:22.415 "enable_zerocopy_send_server": true, 00:18:22.415 "enable_zerocopy_send_client": false, 00:18:22.415 "zerocopy_threshold": 0, 00:18:22.415 "tls_version": 0, 00:18:22.415 "enable_ktls": false 00:18:22.415 } 00:18:22.415 } 00:18:22.415 ] 00:18:22.415 }, 00:18:22.415 { 00:18:22.415 "subsystem": "vmd", 00:18:22.415 "config": [] 00:18:22.415 }, 00:18:22.415 { 00:18:22.415 "subsystem": "accel", 00:18:22.415 "config": [ 00:18:22.415 { 00:18:22.415 "method": "accel_set_options", 00:18:22.415 "params": { 00:18:22.415 "small_cache_size": 128, 
00:18:22.415 "large_cache_size": 16, 00:18:22.415 "task_count": 2048, 00:18:22.415 "sequence_count": 2048, 00:18:22.415 "buf_count": 2048 00:18:22.415 } 00:18:22.415 } 00:18:22.415 ] 00:18:22.415 }, 00:18:22.415 { 00:18:22.415 "subsystem": "bdev", 00:18:22.415 "config": [ 00:18:22.415 { 00:18:22.415 "method": "bdev_set_options", 00:18:22.415 "params": { 00:18:22.416 "bdev_io_pool_size": 65535, 00:18:22.416 "bdev_io_cache_size": 256, 00:18:22.416 "bdev_auto_examine": true, 00:18:22.416 "iobuf_small_cache_size": 128, 00:18:22.416 "iobuf_large_cache_size": 16 00:18:22.416 } 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "method": "bdev_raid_set_options", 00:18:22.416 "params": { 00:18:22.416 "process_window_size_kb": 1024, 00:18:22.416 "process_max_bandwidth_mb_sec": 0 00:18:22.416 } 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "method": "bdev_iscsi_set_options", 00:18:22.416 "params": { 00:18:22.416 "timeout_sec": 30 00:18:22.416 } 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "method": "bdev_nvme_set_options", 00:18:22.416 "params": { 00:18:22.416 "action_on_timeout": "none", 00:18:22.416 "timeout_us": 0, 00:18:22.416 "timeout_admin_us": 0, 00:18:22.416 "keep_alive_timeout_ms": 10000, 00:18:22.416 "arbitration_burst": 0, 00:18:22.416 "low_priority_weight": 0, 00:18:22.416 "medium_priority_weight": 0, 00:18:22.416 "high_priority_weight": 0, 00:18:22.416 "nvme_adminq_poll_period_us": 10000, 00:18:22.416 "nvme_ioq_poll_period_us": 0, 00:18:22.416 "io_queue_requests": 0, 00:18:22.416 "delay_cmd_submit": true, 00:18:22.416 "transport_retry_count": 4, 00:18:22.416 "bdev_retry_count": 3, 00:18:22.416 "transport_ack_timeout": 0, 00:18:22.416 "ctrlr_loss_timeout_sec": 0, 00:18:22.416 "reconnect_delay_sec": 0, 00:18:22.416 "fast_io_fail_timeout_sec": 0, 00:18:22.416 "disable_auto_failback": false, 00:18:22.416 "generate_uuids": false, 00:18:22.416 "transport_tos": 0, 00:18:22.416 "nvme_error_stat": false, 00:18:22.416 "rdma_srq_size": 0, 00:18:22.416 "io_path_stat": false, 00:18:22.416 "allow_accel_sequence": false, 00:18:22.416 "rdma_max_cq_size": 0, 00:18:22.416 "rdma_cm_event_timeout_ms": 0, 00:18:22.416 "dhchap_digests": [ 00:18:22.416 "sha256", 00:18:22.416 "sha384", 00:18:22.416 "sha512" 00:18:22.416 ], 00:18:22.416 "dhchap_dhgroups": [ 00:18:22.416 "null", 00:18:22.416 "ffdhe2048", 00:18:22.416 "ffdhe3072", 00:18:22.416 "ffdhe4096", 00:18:22.416 "ffdhe6144", 00:18:22.416 "ffdhe8192" 00:18:22.416 ] 00:18:22.416 } 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "method": "bdev_nvme_set_hotplug", 00:18:22.416 "params": { 00:18:22.416 "period_us": 100000, 00:18:22.416 "enable": false 00:18:22.416 } 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "method": "bdev_malloc_create", 00:18:22.416 "params": { 00:18:22.416 "name": "malloc0", 00:18:22.416 "num_blocks": 8192, 00:18:22.416 "block_size": 4096, 00:18:22.416 "physical_block_size": 4096, 00:18:22.416 "uuid": "d20be550-74ac-4c20-bacb-4c5a6cd86c8a", 00:18:22.416 "optimal_io_boundary": 0, 00:18:22.416 "md_size": 0, 00:18:22.416 "dif_type": 0, 00:18:22.416 "dif_is_head_of_md": false, 00:18:22.416 "dif_pi_format": 0 00:18:22.416 } 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "method": "bdev_wait_for_examine" 00:18:22.416 } 00:18:22.416 ] 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "subsystem": "scsi", 00:18:22.416 "config": null 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "subsystem": "scheduler", 00:18:22.416 "config": [ 00:18:22.416 { 00:18:22.416 "method": "framework_set_scheduler", 00:18:22.416 "params": { 00:18:22.416 "name": "static" 00:18:22.416 } 
00:18:22.416 } 00:18:22.416 ] 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "subsystem": "vhost_scsi", 00:18:22.416 "config": [] 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "subsystem": "vhost_blk", 00:18:22.416 "config": [] 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "subsystem": "ublk", 00:18:22.416 "config": [ 00:18:22.416 { 00:18:22.416 "method": "ublk_create_target", 00:18:22.416 "params": { 00:18:22.416 "cpumask": "1" 00:18:22.416 } 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "method": "ublk_start_disk", 00:18:22.416 "params": { 00:18:22.416 "bdev_name": "malloc0", 00:18:22.416 "ublk_id": 0, 00:18:22.416 "num_queues": 1, 00:18:22.416 "queue_depth": 128 00:18:22.416 } 00:18:22.416 } 00:18:22.416 ] 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "subsystem": "nbd", 00:18:22.416 "config": [] 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "subsystem": "nvmf", 00:18:22.416 "config": [ 00:18:22.416 { 00:18:22.416 "method": "nvmf_set_config", 00:18:22.416 "params": { 00:18:22.416 "discovery_filter": "match_any", 00:18:22.416 "admin_cmd_passthru": { 00:18:22.416 "identify_ctrlr": false 00:18:22.416 }, 00:18:22.416 "dhchap_digests": [ 00:18:22.416 "sha256", 00:18:22.416 "sha384", 00:18:22.416 "sha512" 00:18:22.416 ], 00:18:22.416 "dhchap_dhgroups": [ 00:18:22.416 "null", 00:18:22.416 "ffdhe2048", 00:18:22.416 "ffdhe3072", 00:18:22.416 "ffdhe4096", 00:18:22.416 "ffdhe6144", 00:18:22.416 "ffdhe8192" 00:18:22.416 ] 00:18:22.416 } 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "method": "nvmf_set_max_subsystems", 00:18:22.416 "params": { 00:18:22.416 "max_subsystems": 1024 00:18:22.416 } 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "method": "nvmf_set_crdt", 00:18:22.416 "params": { 00:18:22.416 "crdt1": 0, 00:18:22.416 "crdt2": 0, 00:18:22.416 "crdt3": 0 00:18:22.416 } 00:18:22.416 } 00:18:22.416 ] 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "subsystem": "iscsi", 00:18:22.416 "config": [ 00:18:22.416 { 00:18:22.416 "method": "iscsi_set_options", 00:18:22.416 "params": { 00:18:22.416 "node_base": "iqn.2016-06.io.spdk", 00:18:22.416 "max_sessions": 128, 00:18:22.416 "max_connections_per_session": 2, 00:18:22.416 "max_queue_depth": 64, 00:18:22.416 "default_time2wait": 2, 00:18:22.416 "default_time2retain": 20, 00:18:22.416 "first_burst_length": 8192, 00:18:22.416 "immediate_data": true, 00:18:22.416 "allow_duplicated_isid": false, 00:18:22.416 "error_recovery_level": 0, 00:18:22.416 "nop_timeout": 60, 00:18:22.416 "nop_in_interval": 30, 00:18:22.416 "disable_chap": false, 00:18:22.416 "require_chap": false, 00:18:22.416 "mutual_chap": false, 00:18:22.416 "chap_group": 0, 00:18:22.416 "max_large_datain_per_connection": 64, 00:18:22.416 "max_r2t_per_connection": 4, 00:18:22.416 "pdu_pool_size": 36864, 00:18:22.416 "immediate_data_pool_size": 16384, 00:18:22.416 "data_out_pool_size": 2048 00:18:22.416 } 00:18:22.416 } 00:18:22.416 ] 00:18:22.416 } 00:18:22.416 ] 00:18:22.416 }' 00:18:22.416 [2024-11-27 04:35:18.942331] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
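A by-hand equivalent of the save/replay cycle being exercised here, assuming the default RPC socket; this is a sketch, not the test script verbatim:

  # Dump the running target's JSON configuration over RPC
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/ublk.json
  # Start a fresh target that replays that configuration at boot
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /tmp/ublk.json

ublk.sh@118 feeds the JSON back through process substitution rather than a temp file, which is why the launch above shows -c /dev/fd/63. The part of the dump that actually recreates the device is the malloc bdev entry plus the "ublk" subsystem entry; trimmed to just those, a config reproducing the same /dev/ublkb0 could look like the sketch below (that omitted subsystems fall back to defaults is an assumption, not something this run demonstrates):

  {
    "subsystems": [
      { "subsystem": "bdev",
        "config": [ { "method": "bdev_malloc_create",
                      "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 } } ] },
      { "subsystem": "ublk",
        "config": [ { "method": "ublk_create_target", "params": { "cpumask": "1" } },
                    { "method": "ublk_start_disk",
                      "params": { "bdev_name": "malloc0", "ublk_id": 0,
                                  "num_queues": 1, "queue_depth": 128 } } ] }
    ]
  }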
00:18:22.416 [2024-11-27 04:35:18.942449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73603 ] 00:18:22.676 [2024-11-27 04:35:19.097180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.676 [2024-11-27 04:35:19.184244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.613 [2024-11-27 04:35:19.852748] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:23.613 [2024-11-27 04:35:19.853430] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:23.613 [2024-11-27 04:35:19.860843] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:23.613 [2024-11-27 04:35:19.860918] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:23.613 [2024-11-27 04:35:19.860927] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:23.613 [2024-11-27 04:35:19.860933] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:23.613 [2024-11-27 04:35:19.869802] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:23.613 [2024-11-27 04:35:19.869822] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:23.613 [2024-11-27 04:35:19.876749] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:23.613 [2024-11-27 04:35:19.876845] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:23.613 [2024-11-27 04:35:19.893742] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73603 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73603 ']' 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73603 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73603 00:18:23.613 killing process with pid 73603 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.613 
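The checks at ublk.sh@122-123 above can be reproduced by hand against any running target; a sketch assuming the default RPC socket /var/tmp/spdk.sock:

  # Same query and jq filter the script uses
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expect /dev/ublkb0
  test -b /dev/ublkb0 && echo 'block node present'   # same [[ -b ... ]] test as ublk.sh@123

ublk_get_disks returns the same array that appeared in the "ublk" section of the configuration dump, so a device that survived the save/replay cycle shows up here with its original id, queue count, and queue depth.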
04:35:19 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73603' 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73603 00:18:23.613 04:35:19 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73603 00:18:24.549 [2024-11-27 04:35:21.001677] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:24.549 [2024-11-27 04:35:21.034818] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:24.549 [2024-11-27 04:35:21.034933] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:24.549 [2024-11-27 04:35:21.041750] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:24.549 [2024-11-27 04:35:21.041798] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:24.549 [2024-11-27 04:35:21.041805] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:24.549 [2024-11-27 04:35:21.041825] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:24.549 [2024-11-27 04:35:21.041942] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:25.921 04:35:22 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:18:25.921 00:18:25.921 real 0m7.546s 00:18:25.921 user 0m5.049s 00:18:25.921 sys 0m3.126s 00:18:25.921 04:35:22 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.921 04:35:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:25.921 ************************************ 00:18:25.921 END TEST test_save_ublk_config 00:18:25.921 ************************************ 00:18:25.921 04:35:22 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73679 00:18:25.921 04:35:22 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:25.921 04:35:22 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73679 00:18:25.921 04:35:22 ublk -- common/autotest_common.sh@835 -- # '[' -z 73679 ']' 00:18:25.921 04:35:22 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.921 04:35:22 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.921 04:35:22 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.921 04:35:22 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.921 04:35:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:25.921 04:35:22 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:25.921 [2024-11-27 04:35:22.351506] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
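-m 0x3 gives this run a two-core mask, which is why two "Reactor started" notices follow (cores 0 and 1) where the previous single-core run printed one. A minimal launch-and-wait sketch; waitforlisten's real implementation polls the RPC socket, and the loop below is only an assumed stand-in for it:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
  spdk_pid=$!
  # Poll until the RPC socket answers, i.e. the target is ready for commands
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done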
00:18:25.921 [2024-11-27 04:35:22.351603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73679 ] 00:18:25.921 [2024-11-27 04:35:22.505378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:26.179 [2024-11-27 04:35:22.610930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.179 [2024-11-27 04:35:22.610932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.744 04:35:23 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.744 04:35:23 ublk -- common/autotest_common.sh@868 -- # return 0 00:18:26.744 04:35:23 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:18:26.744 04:35:23 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:26.744 04:35:23 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.744 04:35:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.744 ************************************ 00:18:26.744 START TEST test_create_ublk 00:18:26.744 ************************************ 00:18:26.744 04:35:23 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:18:26.744 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:18:26.744 04:35:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.744 04:35:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.744 [2024-11-27 04:35:23.230749] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:26.744 [2024-11-27 04:35:23.232627] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:26.744 04:35:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.744 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:18:26.744 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:18:26.744 04:35:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.744 04:35:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.000 04:35:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.000 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:18:27.000 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:27.000 04:35:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.000 04:35:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.000 [2024-11-27 04:35:23.437902] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:27.000 [2024-11-27 04:35:23.438293] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:27.000 [2024-11-27 04:35:23.438309] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:27.000 [2024-11-27 04:35:23.438317] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:27.000 [2024-11-27 04:35:23.446936] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:27.000 [2024-11-27 04:35:23.446959] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:27.000 
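The per-test setup above is three RPCs; as a sketch, with flags taken from the rpc_cmd calls in this log:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target                    # one-time, before any disks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 4096           # 128 MiB backing bdev; prints its name (Malloc0)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512 # expose it as /dev/ublkb0

Each device then walks the control-command lifecycle visible in the *DEBUG* lines: UBLK_CMD_ADD_DEV registers it with the kernel driver, UBLK_CMD_SET_PARAMS pushes the queue and block parameters, and UBLK_CMD_START_DEV makes the /dev/ublkbN node appear; teardown mirrors this with UBLK_CMD_STOP_DEV and UBLK_CMD_DEL_DEV.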
[2024-11-27 04:35:23.453759] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:27.000 [2024-11-27 04:35:23.454407] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:27.000 [2024-11-27 04:35:23.467827] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:27.000 04:35:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.000 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:18:27.000 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:18:27.000 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:18:27.000 04:35:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.000 04:35:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.000 04:35:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.000 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:18:27.000 { 00:18:27.000 "ublk_device": "/dev/ublkb0", 00:18:27.000 "id": 0, 00:18:27.000 "queue_depth": 512, 00:18:27.000 "num_queues": 4, 00:18:27.000 "bdev_name": "Malloc0" 00:18:27.000 } 00:18:27.000 ]' 00:18:27.000 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:18:27.000 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:27.000 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:18:27.000 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:18:27.000 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:18:27.257 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:18:27.257 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:18:27.257 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:18:27.257 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:18:27.257 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:27.257 04:35:23 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:18:27.257 04:35:23 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:18:27.257 04:35:23 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:18:27.257 04:35:23 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:18:27.257 04:35:23 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:18:27.257 04:35:23 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:18:27.257 04:35:23 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:18:27.257 04:35:23 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:18:27.257 04:35:23 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:18:27.257 04:35:23 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:27.257 04:35:23 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
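run_fio_test assembles the command line shown above from its positional arguments; the same job expressed as an fio ini file is an equivalent sketch (option names are standard fio, not taken from this log):

  [fio_test]
  filename=/dev/ublkb0
  offset=0
  size=134217728
  rw=write
  direct=1
  time_based=1
  runtime=10
  do_verify=1
  verify=pattern
  verify_pattern=0xcc
  verify_state_save=0

Because the write phase is time-based and consumes the whole runtime, fio warns below that the verification read phase never starts; the 0xcc pattern is still what gets written to the device.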
00:18:27.257 04:35:23 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:18:27.257 fio: verification read phase will never start because write phase uses all of runtime 00:18:27.257 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:18:27.257 fio-3.35 00:18:27.257 Starting 1 process 00:18:39.450 00:18:39.450 fio_test: (groupid=0, jobs=1): err= 0: pid=73725: Wed Nov 27 04:35:33 2024 00:18:39.450 write: IOPS=18.5k, BW=72.4MiB/s (76.0MB/s)(724MiB/10001msec); 0 zone resets 00:18:39.450 clat (usec): min=35, max=4175, avg=53.08, stdev=83.75 00:18:39.450 lat (usec): min=36, max=4192, avg=53.55, stdev=83.77 00:18:39.450 clat percentiles (usec): 00:18:39.450 | 1.00th=[ 41], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 45], 00:18:39.450 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 49], 60.00th=[ 50], 00:18:39.450 | 70.00th=[ 52], 80.00th=[ 55], 90.00th=[ 59], 95.00th=[ 63], 00:18:39.450 | 99.00th=[ 73], 99.50th=[ 79], 99.90th=[ 1500], 99.95th=[ 2474], 00:18:39.450 | 99.99th=[ 3490] 00:18:39.450 bw ( KiB/s): min=66448, max=79208, per=99.90%, avg=74098.95, stdev=4416.59, samples=19 00:18:39.450 iops : min=16612, max=19802, avg=18524.74, stdev=1104.15, samples=19 00:18:39.450 lat (usec) : 50=60.74%, 100=38.97%, 250=0.11%, 500=0.03%, 750=0.01% 00:18:39.450 lat (usec) : 1000=0.01% 00:18:39.450 lat (msec) : 2=0.05%, 4=0.07%, 10=0.01% 00:18:39.450 cpu : usr=3.84%, sys=15.68%, ctx=185456, majf=0, minf=796 00:18:39.450 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:39.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.450 issued rwts: total=0,185454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.450 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:39.450 00:18:39.450 Run status group 0 (all jobs): 00:18:39.450 WRITE: bw=72.4MiB/s (76.0MB/s), 72.4MiB/s-72.4MiB/s (76.0MB/s-76.0MB/s), io=724MiB (760MB), run=10001-10001msec 00:18:39.450 00:18:39.450 Disk stats (read/write): 00:18:39.450 ublkb0: ios=0/183404, merge=0/0, ticks=0/8053, in_queue=8053, util=99.07% 00:18:39.450 04:35:33 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:18:39.450 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.450 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.450 [2024-11-27 04:35:33.884820] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:39.450 [2024-11-27 04:35:33.914201] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:39.450 [2024-11-27 04:35:33.915133] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:39.450 [2024-11-27 04:35:33.919744] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:39.450 [2024-11-27 04:35:33.919975] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:39.450 [2024-11-27 04:35:33.919987] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:39.450 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.450 04:35:33 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:18:39.450 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:18:39.450 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:39.450 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:39.450 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.450 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:39.450 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.450 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:18:39.450 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.450 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.450 [2024-11-27 04:35:33.935809] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:39.450 request: 00:18:39.450 { 00:18:39.450 "ublk_id": 0, 00:18:39.450 "method": "ublk_stop_disk", 00:18:39.450 "req_id": 1 00:18:39.450 } 00:18:39.450 Got JSON-RPC error response 00:18:39.450 response: 00:18:39.451 { 00:18:39.451 "code": -19, 00:18:39.451 "message": "No such device" 00:18:39.451 } 00:18:39.451 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:39.451 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:18:39.451 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.451 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.451 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.451 04:35:33 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:39.451 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.451 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.451 [2024-11-27 04:35:33.951810] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:39.451 [2024-11-27 04:35:33.955474] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:39.451 [2024-11-27 04:35:33.955507] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:39.451 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.451 04:35:33 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:39.451 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.451 04:35:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.451 04:35:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.451 04:35:34 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:39.451 04:35:34 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:39.451 04:35:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.451 04:35:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.451 04:35:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.451 04:35:34 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:39.451 04:35:34 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:39.451 04:35:34 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:39.451 04:35:34 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:39.451 04:35:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.451 04:35:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.451 04:35:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.451 04:35:34 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:39.451 04:35:34 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:39.451 04:35:34 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:39.451 00:18:39.451 real 0m11.199s 00:18:39.451 user 0m0.689s 00:18:39.451 sys 0m1.644s 00:18:39.451 04:35:34 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.451 04:35:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.451 ************************************ 00:18:39.451 END TEST test_create_ublk 00:18:39.451 ************************************ 00:18:39.451 04:35:34 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:39.451 04:35:34 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:39.451 04:35:34 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.451 04:35:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.451 ************************************ 00:18:39.451 START TEST test_create_multi_ublk 00:18:39.451 ************************************ 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.451 [2024-11-27 04:35:34.466742] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:39.451 [2024-11-27 04:35:34.468357] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.451 [2024-11-27 04:35:34.682858] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:18:39.451 [2024-11-27 04:35:34.683165] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:39.451 [2024-11-27 04:35:34.683177] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:39.451 [2024-11-27 04:35:34.683186] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:39.451 [2024-11-27 04:35:34.693787] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:39.451 [2024-11-27 04:35:34.693810] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:39.451 [2024-11-27 04:35:34.705748] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:39.451 [2024-11-27 04:35:34.706281] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:39.451 [2024-11-27 04:35:34.735752] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.451 04:35:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.451 [2024-11-27 04:35:34.983852] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:39.451 [2024-11-27 04:35:34.984153] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:18:39.451 [2024-11-27 04:35:34.984166] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:39.451 [2024-11-27 04:35:34.984171] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:39.451 [2024-11-27 04:35:34.999765] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:39.451 [2024-11-27 04:35:34.999783] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:39.451 [2024-11-27 04:35:35.019743] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:39.451 [2024-11-27 04:35:35.020263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:39.451 [2024-11-27 04:35:35.048756] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:39.451 
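The two iterations that follow repeat the same recipe for Malloc2/ublk2 and Malloc3/ublk3; condensed, the loop over $MAX_DEV_ID behaves like this sketch (not the script verbatim):

  for i in 0 1 2 3; do
    # one backing bdev per device, then expose it; node appears as /dev/ublkb$i
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
  done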
04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.451 [2024-11-27 04:35:35.254836] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:39.451 [2024-11-27 04:35:35.255150] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:39.451 [2024-11-27 04:35:35.255163] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:39.451 [2024-11-27 04:35:35.255170] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:39.451 [2024-11-27 04:35:35.262758] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:39.451 [2024-11-27 04:35:35.262779] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:39.451 [2024-11-27 04:35:35.270745] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:39.451 [2024-11-27 04:35:35.271275] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:39.451 [2024-11-27 04:35:35.279743] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:18:39.451 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.452 [2024-11-27 04:35:35.438851] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:18:39.452 [2024-11-27 04:35:35.439152] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:18:39.452 [2024-11-27 04:35:35.439165] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:18:39.452 [2024-11-27 04:35:35.439170] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:18:39.452 
[2024-11-27 04:35:35.446757] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:39.452 [2024-11-27 04:35:35.446775] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:39.452 [2024-11-27 04:35:35.454747] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:39.452 [2024-11-27 04:35:35.455286] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:18:39.452 [2024-11-27 04:35:35.475759] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:18:39.452 { 00:18:39.452 "ublk_device": "/dev/ublkb0", 00:18:39.452 "id": 0, 00:18:39.452 "queue_depth": 512, 00:18:39.452 "num_queues": 4, 00:18:39.452 "bdev_name": "Malloc0" 00:18:39.452 }, 00:18:39.452 { 00:18:39.452 "ublk_device": "/dev/ublkb1", 00:18:39.452 "id": 1, 00:18:39.452 "queue_depth": 512, 00:18:39.452 "num_queues": 4, 00:18:39.452 "bdev_name": "Malloc1" 00:18:39.452 }, 00:18:39.452 { 00:18:39.452 "ublk_device": "/dev/ublkb2", 00:18:39.452 "id": 2, 00:18:39.452 "queue_depth": 512, 00:18:39.452 "num_queues": 4, 00:18:39.452 "bdev_name": "Malloc2" 00:18:39.452 }, 00:18:39.452 { 00:18:39.452 "ublk_device": "/dev/ublkb3", 00:18:39.452 "id": 3, 00:18:39.452 "queue_depth": 512, 00:18:39.452 "num_queues": 4, 00:18:39.452 "bdev_name": "Malloc3" 00:18:39.452 } 00:18:39.452 ]' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:18:39.452 04:35:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:18:39.452 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:18:39.452 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:18:39.452 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.710 [2024-11-27 04:35:36.166840] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:39.710 [2024-11-27 04:35:36.200234] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:39.710 [2024-11-27 04:35:36.201243] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:39.710 [2024-11-27 04:35:36.206755] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:39.710 [2024-11-27 04:35:36.206998] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:39.710 [2024-11-27 04:35:36.207011] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.710 [2024-11-27 04:35:36.222816] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:39.710 [2024-11-27 04:35:36.251137] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:39.710 [2024-11-27 04:35:36.252170] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:39.710 [2024-11-27 04:35:36.262755] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:39.710 [2024-11-27 04:35:36.263015] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:39.710 [2024-11-27 04:35:36.263028] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.710 04:35:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.710 [2024-11-27 04:35:36.276861] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:18:39.968 [2024-11-27 04:35:36.322234] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:39.968 [2024-11-27 04:35:36.323150] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:18:39.968 [2024-11-27 04:35:36.329752] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:39.968 [2024-11-27 04:35:36.329995] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:18:39.968 [2024-11-27 04:35:36.330003] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:18:39.968 04:35:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.968 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:39.968 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:18:39.968 04:35:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.968 04:35:36 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:18:39.968 [2024-11-27 04:35:36.344855] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:18:39.968 [2024-11-27 04:35:36.379775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:39.968 [2024-11-27 04:35:36.380422] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:18:39.968 [2024-11-27 04:35:36.384854] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:39.968 [2024-11-27 04:35:36.385183] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:18:39.968 [2024-11-27 04:35:36.385196] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:18:39.968 04:35:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.968 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:18:40.318 [2024-11-27 04:35:36.656812] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:40.318 [2024-11-27 04:35:36.660507] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:40.318 [2024-11-27 04:35:36.660544] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:40.318 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:18:40.318 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:40.318 04:35:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:40.318 04:35:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.318 04:35:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:40.575 04:35:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.575 04:35:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:40.575 04:35:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:40.575 04:35:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.575 04:35:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:40.833 04:35:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.833 04:35:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:40.833 04:35:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:40.833 04:35:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.833 04:35:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:41.399 04:35:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.399 04:35:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:41.399 04:35:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:41.399 04:35:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.399 04:35:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:41.657 04:35:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.657 04:35:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:41.657 04:35:38 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:41.657 04:35:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.657 04:35:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:41.657 04:35:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.657 04:35:38 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:41.657 04:35:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:41.657 04:35:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:41.657 04:35:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:41.657 04:35:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.657 04:35:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:41.657 04:35:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.657 04:35:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:41.657 04:35:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:41.915 ************************************ 00:18:41.915 END TEST test_create_multi_ublk 00:18:41.915 ************************************ 00:18:41.915 04:35:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:41.915 00:18:41.915 real 0m3.786s 00:18:41.915 user 0m0.917s 00:18:41.915 sys 0m0.148s 00:18:41.915 04:35:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:41.915 04:35:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:41.915 04:35:38 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:41.915 04:35:38 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:41.915 04:35:38 ublk -- ublk/ublk.sh@130 -- # killprocess 73679 00:18:41.915 04:35:38 ublk -- common/autotest_common.sh@954 -- # '[' -z 73679 ']' 00:18:41.915 04:35:38 ublk -- common/autotest_common.sh@958 -- # kill -0 73679 00:18:41.915 04:35:38 ublk -- common/autotest_common.sh@959 -- # uname 00:18:41.915 04:35:38 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.915 04:35:38 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73679 00:18:41.915 killing process with pid 73679 00:18:41.915 04:35:38 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.915 04:35:38 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.915 04:35:38 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73679' 00:18:41.915 04:35:38 ublk -- common/autotest_common.sh@973 -- # kill 73679 00:18:41.915 04:35:38 ublk -- common/autotest_common.sh@978 -- # wait 73679 00:18:43.315 [2024-11-27 04:35:39.435486] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:43.315 [2024-11-27 04:35:39.435539] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:43.882 00:18:43.882 real 0m25.626s 00:18:43.882 user 0m36.291s 00:18:43.882 sys 0m10.944s 00:18:43.882 04:35:40 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.882 04:35:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:43.882 ************************************ 00:18:43.882 END TEST ublk 00:18:43.882 ************************************ 00:18:43.882 04:35:40 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:43.882 
04:35:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:43.882 04:35:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.882 04:35:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.882 ************************************ 00:18:43.882 START TEST ublk_recovery 00:18:43.882 ************************************ 00:18:43.882 04:35:40 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:43.882 * Looking for test storage... 00:18:43.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:43.882 04:35:40 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:43.882 04:35:40 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:18:43.882 04:35:40 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:43.882 04:35:40 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.882 04:35:40 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:18:43.882 04:35:40 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.882 04:35:40 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:43.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.883 --rc genhtml_branch_coverage=1 00:18:43.883 --rc genhtml_function_coverage=1 00:18:43.883 --rc genhtml_legend=1 00:18:43.883 --rc geninfo_all_blocks=1 00:18:43.883 --rc geninfo_unexecuted_blocks=1 00:18:43.883 00:18:43.883 ' 00:18:43.883 04:35:40 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:43.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.883 --rc genhtml_branch_coverage=1 00:18:43.883 --rc genhtml_function_coverage=1 00:18:43.883 --rc genhtml_legend=1 00:18:43.883 --rc geninfo_all_blocks=1 00:18:43.883 --rc geninfo_unexecuted_blocks=1 00:18:43.883 00:18:43.883 ' 00:18:43.883 04:35:40 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:43.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.883 --rc genhtml_branch_coverage=1 00:18:43.883 --rc genhtml_function_coverage=1 00:18:43.883 --rc genhtml_legend=1 00:18:43.883 --rc geninfo_all_blocks=1 00:18:43.883 --rc geninfo_unexecuted_blocks=1 00:18:43.883 00:18:43.883 ' 00:18:43.883 04:35:40 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:43.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.883 --rc genhtml_branch_coverage=1 00:18:43.883 --rc genhtml_function_coverage=1 00:18:43.883 --rc genhtml_legend=1 00:18:43.883 --rc geninfo_all_blocks=1 00:18:43.883 --rc geninfo_unexecuted_blocks=1 00:18:43.883 00:18:43.883 ' 00:18:43.883 04:35:40 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:43.883 04:35:40 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:43.883 04:35:40 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:43.883 04:35:40 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:43.883 04:35:40 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:43.883 04:35:40 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:43.883 04:35:40 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:43.883 04:35:40 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:43.883 04:35:40 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:18:43.883 04:35:40 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:43.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.883 04:35:40 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74081 00:18:43.883 04:35:40 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:43.883 04:35:40 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74081 00:18:43.883 04:35:40 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74081 ']' 00:18:43.883 04:35:40 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.883 04:35:40 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.883 04:35:40 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.883 04:35:40 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.883 04:35:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.883 04:35:40 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:43.883 [2024-11-27 04:35:40.424999] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:18:43.883 [2024-11-27 04:35:40.425118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74081 ] 00:18:44.141 [2024-11-27 04:35:40.582848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:44.141 [2024-11-27 04:35:40.686183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.141 [2024-11-27 04:35:40.686381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.707 04:35:41 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.707 04:35:41 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:44.707 04:35:41 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:44.707 04:35:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.707 04:35:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.707 [2024-11-27 04:35:41.286745] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:44.707 [2024-11-27 04:35:41.288602] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:44.707 04:35:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.707 04:35:41 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:44.707 04:35:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.707 04:35:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.965 malloc0 00:18:44.965 04:35:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.965 04:35:41 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:44.965 04:35:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.965 04:35:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.965 [2024-11-27 04:35:41.390877] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:18:44.965 [2024-11-27 04:35:41.390994] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:44.965 [2024-11-27 04:35:41.391005] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:44.965 [2024-11-27 04:35:41.391015] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:44.965 [2024-11-27 04:35:41.399843] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:44.965 [2024-11-27 04:35:41.399865] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:44.965 [2024-11-27 04:35:41.406751] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:44.965 [2024-11-27 04:35:41.406893] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:44.966 [2024-11-27 04:35:41.423748] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:44.966 1 00:18:44.966 04:35:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.966 04:35:41 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:45.899 04:35:42 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74116 00:18:45.899 04:35:42 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:45.899 04:35:42 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:46.157 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:46.157 fio-3.35 00:18:46.157 Starting 1 process 00:18:51.424 04:35:47 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74081 00:18:51.424 04:35:47 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:56.748 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74081 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:56.748 04:35:52 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74227 00:18:56.748 04:35:52 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:56.748 04:35:52 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74227 00:18:56.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.748 04:35:52 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74227 ']' 00:18:56.748 04:35:52 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:56.748 04:35:52 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.748 04:35:52 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.748 04:35:52 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.748 04:35:52 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.748 04:35:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:56.748 [2024-11-27 04:35:52.514090] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
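Note: the trace above and just below condenses to the following recovery flow. This is a sketch reconstructed from the RPC calls visible in this log (ublk_create_target, bdev_malloc_create, ublk_start_disk, ublk_recover_disk); the bare rpc.py invocations stand in for the harness's rpc_cmd wrapper and are illustrative, not the script's literal code.
# ublk crash-recovery flow, as driven by ublk_recovery.sh in this run
spdk_tgt -m 0x3 -L ublk &                      # original target (pid 74081 above)
rpc.py ublk_create_target
rpc.py bdev_malloc_create -b malloc0 64 4096   # 64 MiB ram bdev, 4 KiB blocks
rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # exposes /dev/ublkb1; fio runs against it
kill -9 "$OLD_TGT_PID"                         # hard-kill mid-I/O (kill -9 74081 above)
spdk_tgt -m 0x3 -L ublk &                      # replacement target (pid 74227)
rpc.py ublk_create_target
rpc.py bdev_malloc_create -b malloc0 64 4096   # recreate the backing bdev
rpc.py ublk_recover_disk malloc0 1             # re-attach the surviving /dev/ublkb1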
00:18:56.748 [2024-11-27 04:35:52.514199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74227 ] 00:18:56.748 [2024-11-27 04:35:52.668944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:56.748 [2024-11-27 04:35:52.774411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.748 [2024-11-27 04:35:52.774577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.006 04:35:53 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.006 04:35:53 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:57.006 04:35:53 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:57.006 04:35:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.006 04:35:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:57.006 [2024-11-27 04:35:53.391743] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:57.006 [2024-11-27 04:35:53.393835] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:57.006 04:35:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.006 04:35:53 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:57.006 04:35:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.006 04:35:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:57.006 malloc0 00:18:57.006 04:35:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.006 04:35:53 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:57.006 04:35:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.006 04:35:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:57.006 [2024-11-27 04:35:53.503914] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:57.006 [2024-11-27 04:35:53.503965] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:57.006 [2024-11-27 04:35:53.503976] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:57.006 [2024-11-27 04:35:53.511784] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:57.006 [2024-11-27 04:35:53.511819] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:57.006 1 00:18:57.006 04:35:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.006 04:35:53 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74116 00:18:57.941 [2024-11-27 04:35:54.511852] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:57.941 [2024-11-27 04:35:54.518774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:57.941 [2024-11-27 04:35:54.518802] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:59.316 [2024-11-27 04:35:55.518842] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:59.316 [2024-11-27 04:35:55.523763] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:59.316 [2024-11-27 04:35:55.523786] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1
00:19:00.252 [2024-11-27 04:35:56.523813] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:19:00.252 [2024-11-27 04:35:56.532760] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:19:00.252 [2024-11-27 04:35:56.532883] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:19:00.252 [2024-11-27 04:35:56.532908] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:19:00.252 [2024-11-27 04:35:56.533030] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:19:22.238 [2024-11-27 04:36:17.909757] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
00:19:22.238 [2024-11-27 04:36:17.913277] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
00:19:22.238 [2024-11-27 04:36:17.924982] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
00:19:22.238 [2024-11-27 04:36:17.925079] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:19:48.802
00:19:48.802 fio_test: (groupid=0, jobs=1): err= 0: pid=74119: Wed Nov 27 04:36:42 2024
00:19:48.802 read: IOPS=14.2k, BW=55.4MiB/s (58.1MB/s)(3323MiB/60002msec)
00:19:48.802 slat (nsec): min=857, max=355605, avg=5090.26, stdev=1789.92
00:19:48.802 clat (usec): min=983, max=30497k, avg=4471.97, stdev=264481.96
00:19:48.802 lat (usec): min=992, max=30497k, avg=4477.06, stdev=264481.96
00:19:48.802 clat percentiles (usec):
00:19:48.802 | 1.00th=[ 1696], 5.00th=[ 1844], 10.00th=[ 1876], 20.00th=[ 1909],
00:19:48.802 | 30.00th=[ 1926], 40.00th=[ 1958], 50.00th=[ 1975], 60.00th=[ 1991],
00:19:48.802 | 70.00th=[ 2040], 80.00th=[ 2343], 90.00th=[ 2474], 95.00th=[ 3261],
00:19:48.802 | 99.00th=[ 5342], 99.50th=[ 5866], 99.90th=[ 7701], 99.95th=[ 8717],
00:19:48.802 | 99.99th=[13566]
00:19:48.802 bw ( KiB/s): min=28272, max=125568, per=100.00%, avg=113491.80, stdev=18135.08, samples=59
00:19:48.802 iops : min= 7068, max=31392, avg=28372.95, stdev=4533.77, samples=59
00:19:48.802 write: IOPS=14.2k, BW=55.3MiB/s (58.0MB/s)(3319MiB/60002msec); 0 zone resets
00:19:48.802 slat (nsec): min=971, max=301929, avg=5135.30, stdev=1774.30
00:19:48.802 clat (usec): min=964, max=30497k, avg=4550.55, stdev=264659.67
00:19:48.802 lat (usec): min=972, max=30497k, avg=4555.69, stdev=264659.67
00:19:48.802 clat percentiles (usec):
00:19:48.802 | 1.00th=[ 1729], 5.00th=[ 1926], 10.00th=[ 1958], 20.00th=[ 1991],
00:19:48.802 | 30.00th=[ 2024], 40.00th=[ 2040], 50.00th=[ 2057], 60.00th=[ 2089],
00:19:48.802 | 70.00th=[ 2147], 80.00th=[ 2442], 90.00th=[ 2540], 95.00th=[ 3195],
00:19:48.802 | 99.00th=[ 5407], 99.50th=[ 5932], 99.90th=[ 7701], 99.95th=[ 8848],
00:19:48.802 | 99.99th=[13698]
00:19:48.802 bw ( KiB/s): min=28688, max=124216, per=100.00%, avg=113348.75, stdev=17934.33, samples=59
00:19:48.802 iops : min= 7172, max=31054, avg=28337.19, stdev=4483.58, samples=59
00:19:48.802 lat (usec) : 1000=0.01%
00:19:48.802 lat (msec) : 2=41.70%, 4=55.11%, 10=3.14%, 20=0.04%, >=2000=0.01%
00:19:48.802 cpu : usr=3.37%, sys=14.91%, ctx=59530, majf=0, minf=13
00:19:48.802 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:19:48.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:48.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:48.802 issued rwts: total=850732,849591,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:48.802 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:48.802
00:19:48.802 Run status group 0 (all jobs):
00:19:48.802 READ: bw=55.4MiB/s (58.1MB/s), 55.4MiB/s-55.4MiB/s (58.1MB/s-58.1MB/s), io=3323MiB (3485MB), run=60002-60002msec
00:19:48.802 WRITE: bw=55.3MiB/s (58.0MB/s), 55.3MiB/s-55.3MiB/s (58.0MB/s-58.0MB/s), io=3319MiB (3480MB), run=60002-60002msec
00:19:48.802
00:19:48.802 Disk stats (read/write):
00:19:48.802 ublkb1: ios=847398/846318, merge=0/0, ticks=3750066/3742992, in_queue=7493059, util=99.92%
00:19:48.802 04:36:42 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:19:48.802 04:36:42 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.802 04:36:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:19:48.803 [2024-11-27 04:36:42.691802] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:19:48.803 [2024-11-27 04:36:42.727768] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:48.803 [2024-11-27 04:36:42.727919] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:19:48.803 [2024-11-27 04:36:42.739767] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:48.803 [2024-11-27 04:36:42.743825] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:19:48.803 [2024-11-27 04:36:42.743838] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:19:48.803 04:36:42 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.803 04:36:42 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:19:48.803 04:36:42 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.803 04:36:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:19:48.803 [2024-11-27 04:36:42.747885] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:19:48.803 [2024-11-27 04:36:42.754735] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:19:48.803 [2024-11-27 04:36:42.754770] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:19:48.803 04:36:42 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.803 04:36:42 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:19:48.803 04:36:42 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:19:48.803 04:36:42 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74227
00:19:48.803 04:36:42 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74227 ']'
00:19:48.803 04:36:42 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74227
00:19:48.803 04:36:42 ublk_recovery -- common/autotest_common.sh@959 -- # uname
00:19:48.803 04:36:42 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:48.803 04:36:42 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74227
00:19:48.803 killing process with pid 74227
00:19:48.803 04:36:42 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:48.803 04:36:42 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:48.803 04:36:42 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74227'
00:19:48.803 04:36:42 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74227
00:19:48.803 04:36:42 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74227
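Note: a quick arithmetic cross-check of the fio summary above; the figures are taken from the log, only the calculation is added.
# 850732 reads completed in 60.002 s at 4 KiB per I/O:
echo 'scale=1; 850732/60.002' | bc               # ~14178, matching "IOPS=14.2k"
echo 'scale=1; 850732*4096/60.002/1048576' | bc  # ~55.4, matching "BW=55.4MiB/s"
# The clat max of 30497k usec (~30.5 s) is the recovery window: I/O issued
# between the kill -9 (04:35:47) and "recover done successfully" (04:36:17)
# stalled until the replacement target re-attached the queues.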
00:19:48.802 [2024-11-27 04:36:43.841616] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:48.802 [2024-11-27 04:36:43.841671] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:48.802 00:19:48.802 real 1m4.366s 00:19:48.802 user 1m46.691s 00:19:48.802 sys 0m22.376s 00:19:48.803 04:36:44 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.803 ************************************ 00:19:48.803 END TEST ublk_recovery 00:19:48.803 ************************************ 00:19:48.803 04:36:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.803 04:36:44 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:19:48.803 04:36:44 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:48.803 04:36:44 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:48.803 04:36:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.803 04:36:44 -- common/autotest_common.sh@10 -- # set +x 00:19:48.803 04:36:44 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:48.803 04:36:44 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:48.803 04:36:44 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:48.803 04:36:44 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:48.803 04:36:44 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:48.803 04:36:44 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:48.803 04:36:44 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:48.803 04:36:44 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:48.803 04:36:44 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:48.803 04:36:44 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:19:48.803 04:36:44 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:48.803 04:36:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:48.803 04:36:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.803 04:36:44 -- common/autotest_common.sh@10 -- # set +x 00:19:48.803 ************************************ 00:19:48.803 START TEST ftl 00:19:48.803 ************************************ 00:19:48.803 04:36:44 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:48.803 * Looking for test storage... 
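Note: every suite in this log opens with the same probe ("Looking for test storage", then an lcov version check); the xtrace earlier for ublk_recovery, and again just below for ftl, walks the component-wise version compare in scripts/common.sh. A simplified, self-contained sketch of that logic follows; it is not the library's verbatim code.
# returns 0 (true) when version $1 sorts strictly before $2, as in "lt 1.15 2"
version_lt() {
  local IFS='.-:'
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v
  for ((v = 0; v < n; v++)); do
    (( ${a[v]:-0} > ${b[v]:-0} )) && return 1   # component newer: not less
    (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # component older: less
  done
  return 1                                      # equal versions: not strictly less
}
version_lt 1.15 2 && echo 'old lcov: add branch/function coverage flags'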
00:19:48.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:48.803 04:36:44 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:48.803 04:36:44 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:19:48.803 04:36:44 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:48.803 04:36:44 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:48.803 04:36:44 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:48.803 04:36:44 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:48.803 04:36:44 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:48.803 04:36:44 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.803 04:36:44 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:19:48.803 04:36:44 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:19:48.803 04:36:44 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:19:48.803 04:36:44 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:19:48.803 04:36:44 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:19:48.803 04:36:44 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:19:48.803 04:36:44 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:48.803 04:36:44 ftl -- scripts/common.sh@344 -- # case "$op" in 00:19:48.803 04:36:44 ftl -- scripts/common.sh@345 -- # : 1 00:19:48.803 04:36:44 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:48.803 04:36:44 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:48.803 04:36:44 ftl -- scripts/common.sh@365 -- # decimal 1 00:19:48.803 04:36:44 ftl -- scripts/common.sh@353 -- # local d=1 00:19:48.803 04:36:44 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.803 04:36:44 ftl -- scripts/common.sh@355 -- # echo 1 00:19:48.803 04:36:44 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:19:48.803 04:36:44 ftl -- scripts/common.sh@366 -- # decimal 2 00:19:48.803 04:36:44 ftl -- scripts/common.sh@353 -- # local d=2 00:19:48.803 04:36:44 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.803 04:36:44 ftl -- scripts/common.sh@355 -- # echo 2 00:19:48.803 04:36:44 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:19:48.803 04:36:44 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:48.803 04:36:44 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:48.803 04:36:44 ftl -- scripts/common.sh@368 -- # return 0 00:19:48.803 04:36:44 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.803 04:36:44 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:48.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.803 --rc genhtml_branch_coverage=1 00:19:48.803 --rc genhtml_function_coverage=1 00:19:48.803 --rc genhtml_legend=1 00:19:48.803 --rc geninfo_all_blocks=1 00:19:48.803 --rc geninfo_unexecuted_blocks=1 00:19:48.803 00:19:48.803 ' 00:19:48.803 04:36:44 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:48.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.803 --rc genhtml_branch_coverage=1 00:19:48.803 --rc genhtml_function_coverage=1 00:19:48.803 --rc genhtml_legend=1 00:19:48.803 --rc geninfo_all_blocks=1 00:19:48.803 --rc geninfo_unexecuted_blocks=1 00:19:48.803 00:19:48.803 ' 00:19:48.803 04:36:44 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:48.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.803 --rc genhtml_branch_coverage=1 00:19:48.803 --rc genhtml_function_coverage=1 00:19:48.803 --rc 
genhtml_legend=1 00:19:48.803 --rc geninfo_all_blocks=1 00:19:48.803 --rc geninfo_unexecuted_blocks=1 00:19:48.803 00:19:48.803 ' 00:19:48.803 04:36:44 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:48.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.803 --rc genhtml_branch_coverage=1 00:19:48.803 --rc genhtml_function_coverage=1 00:19:48.803 --rc genhtml_legend=1 00:19:48.803 --rc geninfo_all_blocks=1 00:19:48.803 --rc geninfo_unexecuted_blocks=1 00:19:48.803 00:19:48.803 ' 00:19:48.803 04:36:44 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:48.803 04:36:44 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:48.803 04:36:44 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:48.803 04:36:44 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:48.803 04:36:44 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:48.803 04:36:44 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:48.803 04:36:44 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:48.803 04:36:44 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:48.803 04:36:44 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:48.803 04:36:44 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:48.803 04:36:44 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:48.803 04:36:44 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:48.803 04:36:44 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:48.803 04:36:44 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:48.803 04:36:44 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:48.803 04:36:44 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:48.803 04:36:44 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:48.803 04:36:44 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:48.803 04:36:44 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:48.803 04:36:44 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:48.803 04:36:44 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:48.803 04:36:44 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:48.803 04:36:44 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:48.803 04:36:44 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:48.803 04:36:44 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:48.803 04:36:44 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:48.803 04:36:44 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:48.803 04:36:44 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:48.803 04:36:44 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:48.803 04:36:44 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:48.803 04:36:44 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:48.803 04:36:44 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:19:48.803 04:36:44 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:48.803 04:36:44 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:48.803 04:36:44 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:48.803 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:48.803 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:48.803 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:48.803 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:48.803 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:48.803 04:36:45 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75027 00:19:48.803 04:36:45 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75027 00:19:48.803 04:36:45 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:48.803 04:36:45 ftl -- common/autotest_common.sh@835 -- # '[' -z 75027 ']' 00:19:48.803 04:36:45 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.803 04:36:45 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.803 04:36:45 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.803 04:36:45 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.803 04:36:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:48.803 [2024-11-27 04:36:45.309956] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:19:48.803 [2024-11-27 04:36:45.310226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75027 ] 00:19:49.061 [2024-11-27 04:36:45.466764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.061 [2024-11-27 04:36:45.568136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.626 04:36:46 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.626 04:36:46 ftl -- common/autotest_common.sh@868 -- # return 0 00:19:49.626 04:36:46 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:49.884 04:36:46 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:50.818 04:36:47 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:50.818 04:36:47 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:51.079 04:36:47 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:51.079 04:36:47 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:51.079 04:36:47 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:51.341 04:36:47 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:51.341 04:36:47 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:51.341 04:36:47 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:51.341 04:36:47 ftl -- ftl/ftl.sh@50 -- # break 00:19:51.341 04:36:47 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:51.341 04:36:47 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:19:51.341 04:36:47 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:51.341 04:36:47 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:51.600 04:36:48 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:51.600 04:36:48 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:51.600 04:36:48 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:51.600 04:36:48 ftl -- ftl/ftl.sh@63 -- # break 00:19:51.600 04:36:48 ftl -- ftl/ftl.sh@66 -- # killprocess 75027 00:19:51.600 04:36:48 ftl -- common/autotest_common.sh@954 -- # '[' -z 75027 ']' 00:19:51.600 04:36:48 ftl -- common/autotest_common.sh@958 -- # kill -0 75027 00:19:51.600 04:36:48 ftl -- common/autotest_common.sh@959 -- # uname 00:19:51.600 04:36:48 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.600 04:36:48 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75027 00:19:51.600 killing process with pid 75027 00:19:51.600 04:36:48 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:51.600 04:36:48 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:51.600 04:36:48 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75027' 00:19:51.600 04:36:48 ftl -- common/autotest_common.sh@973 -- # kill 75027 00:19:51.600 04:36:48 ftl -- common/autotest_common.sh@978 -- # wait 75027 00:19:52.974 04:36:49 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:52.974 04:36:49 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:52.974 04:36:49 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:52.974 04:36:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.974 04:36:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:52.974 ************************************ 00:19:52.974 START TEST ftl_fio_basic 00:19:52.974 ************************************ 00:19:52.974 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:53.233 * Looking for test storage... 
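Note: the cache/base device selection traced in the ftl.sh section above reduces to two jq filters over bdev_get_bdevs output. The filter expressions below are copied verbatim from the log (rpc.py abbreviates the full scripts/rpc.py path; 0000:00:10.0 is the cache BDF this run picked):
# cache candidates: NVMe bdevs with 64-byte metadata and >= 1310720 blocks
rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
# base candidates: large enough non-zoned bdevs, excluding the chosen cache
rpc.py bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'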
00:19:53.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:53.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.233 --rc genhtml_branch_coverage=1 00:19:53.233 --rc genhtml_function_coverage=1 00:19:53.233 --rc genhtml_legend=1 00:19:53.233 --rc geninfo_all_blocks=1 00:19:53.233 --rc geninfo_unexecuted_blocks=1 00:19:53.233 00:19:53.233 ' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:53.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.233 --rc 
genhtml_branch_coverage=1 00:19:53.233 --rc genhtml_function_coverage=1 00:19:53.233 --rc genhtml_legend=1 00:19:53.233 --rc geninfo_all_blocks=1 00:19:53.233 --rc geninfo_unexecuted_blocks=1 00:19:53.233 00:19:53.233 ' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:53.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.233 --rc genhtml_branch_coverage=1 00:19:53.233 --rc genhtml_function_coverage=1 00:19:53.233 --rc genhtml_legend=1 00:19:53.233 --rc geninfo_all_blocks=1 00:19:53.233 --rc geninfo_unexecuted_blocks=1 00:19:53.233 00:19:53.233 ' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:53.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.233 --rc genhtml_branch_coverage=1 00:19:53.233 --rc genhtml_function_coverage=1 00:19:53.233 --rc genhtml_legend=1 00:19:53.233 --rc geninfo_all_blocks=1 00:19:53.233 --rc geninfo_unexecuted_blocks=1 00:19:53.233 00:19:53.233 ' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:53.233 
04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:53.233 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75160 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75160 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75160 ']' 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
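Note: waitforlisten 75160 above blocks until the freshly launched spdk_tgt answers RPC on /var/tmp/spdk.sock. A minimal stand-in with the same contract is sketched below (poll the socket, bail out early if the pid dies, give up after max_retries); the real implementation in autotest_common.sh differs in detail.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
  for ((i = 0; i < max_retries; i++)); do
    # rpc_get_methods only succeeds once the target's RPC server is listening
    rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
    kill -0 "$pid" 2>/dev/null || return 1   # process exited before listening
    sleep 0.5
  done
  return 1
}
waitforlisten 75160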
00:19:53.234 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:53.234 04:36:49 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:53.234 [2024-11-27 04:36:49.752361] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:19:53.234 [2024-11-27 04:36:49.752830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75160 ] 00:19:53.492 [2024-11-27 04:36:49.911097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:53.492 [2024-11-27 04:36:49.998861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.492 [2024-11-27 04:36:49.999177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.492 [2024-11-27 04:36:49.999192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.057 04:36:50 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.058 04:36:50 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:19:54.058 04:36:50 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:54.058 04:36:50 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:54.058 04:36:50 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:54.058 04:36:50 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:54.058 04:36:50 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:54.058 04:36:50 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:54.315 04:36:50 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:54.315 04:36:50 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:54.315 04:36:50 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:54.315 04:36:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:54.315 04:36:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:54.315 04:36:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:54.315 04:36:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:54.315 04:36:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:54.572 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:54.572 { 00:19:54.572 "name": "nvme0n1", 00:19:54.572 "aliases": [ 00:19:54.572 "3c4470d7-c9c9-4b19-9cc1-681b6e0d9eaf" 00:19:54.572 ], 00:19:54.572 "product_name": "NVMe disk", 00:19:54.572 "block_size": 4096, 00:19:54.572 "num_blocks": 1310720, 00:19:54.572 "uuid": "3c4470d7-c9c9-4b19-9cc1-681b6e0d9eaf", 00:19:54.572 "numa_id": -1, 00:19:54.572 "assigned_rate_limits": { 00:19:54.572 "rw_ios_per_sec": 0, 00:19:54.572 "rw_mbytes_per_sec": 0, 00:19:54.572 "r_mbytes_per_sec": 0, 00:19:54.572 "w_mbytes_per_sec": 0 00:19:54.572 }, 00:19:54.572 "claimed": false, 00:19:54.572 "zoned": false, 00:19:54.572 "supported_io_types": { 00:19:54.572 "read": true, 00:19:54.572 "write": true, 00:19:54.572 "unmap": true, 00:19:54.572 "flush": true, 
00:19:54.572 "reset": true, 00:19:54.572 "nvme_admin": true, 00:19:54.572 "nvme_io": true, 00:19:54.572 "nvme_io_md": false, 00:19:54.572 "write_zeroes": true, 00:19:54.572 "zcopy": false, 00:19:54.572 "get_zone_info": false, 00:19:54.572 "zone_management": false, 00:19:54.572 "zone_append": false, 00:19:54.572 "compare": true, 00:19:54.572 "compare_and_write": false, 00:19:54.572 "abort": true, 00:19:54.572 "seek_hole": false, 00:19:54.572 "seek_data": false, 00:19:54.572 "copy": true, 00:19:54.572 "nvme_iov_md": false 00:19:54.572 }, 00:19:54.572 "driver_specific": { 00:19:54.573 "nvme": [ 00:19:54.573 { 00:19:54.573 "pci_address": "0000:00:11.0", 00:19:54.573 "trid": { 00:19:54.573 "trtype": "PCIe", 00:19:54.573 "traddr": "0000:00:11.0" 00:19:54.573 }, 00:19:54.573 "ctrlr_data": { 00:19:54.573 "cntlid": 0, 00:19:54.573 "vendor_id": "0x1b36", 00:19:54.573 "model_number": "QEMU NVMe Ctrl", 00:19:54.573 "serial_number": "12341", 00:19:54.573 "firmware_revision": "8.0.0", 00:19:54.573 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:54.573 "oacs": { 00:19:54.573 "security": 0, 00:19:54.573 "format": 1, 00:19:54.573 "firmware": 0, 00:19:54.573 "ns_manage": 1 00:19:54.573 }, 00:19:54.573 "multi_ctrlr": false, 00:19:54.573 "ana_reporting": false 00:19:54.573 }, 00:19:54.573 "vs": { 00:19:54.573 "nvme_version": "1.4" 00:19:54.573 }, 00:19:54.573 "ns_data": { 00:19:54.573 "id": 1, 00:19:54.573 "can_share": false 00:19:54.573 } 00:19:54.573 } 00:19:54.573 ], 00:19:54.573 "mp_policy": "active_passive" 00:19:54.573 } 00:19:54.573 } 00:19:54.573 ]' 00:19:54.573 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:54.573 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:54.573 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:54.573 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:54.573 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:54.573 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:19:54.573 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:54.573 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:54.573 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:54.573 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:54.573 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:54.829 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:54.829 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:55.086 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=288f3427-e058-49a5-b162-289f4c63740f 00:19:55.087 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 288f3427-e058-49a5-b162-289f4c63740f 00:19:55.087 04:36:51 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=e5d74e04-6442-4782-9d73-8ba3cd3297d4 00:19:55.344 04:36:51 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e5d74e04-6442-4782-9d73-8ba3cd3297d4 00:19:55.344 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:55.344 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:55.344 04:36:51 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=e5d74e04-6442-4782-9d73-8ba3cd3297d4 00:19:55.344 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:55.344 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size e5d74e04-6442-4782-9d73-8ba3cd3297d4 00:19:55.344 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=e5d74e04-6442-4782-9d73-8ba3cd3297d4 00:19:55.344 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:55.344 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:55.344 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:55.344 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e5d74e04-6442-4782-9d73-8ba3cd3297d4 00:19:55.344 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:55.344 { 00:19:55.344 "name": "e5d74e04-6442-4782-9d73-8ba3cd3297d4", 00:19:55.344 "aliases": [ 00:19:55.344 "lvs/nvme0n1p0" 00:19:55.344 ], 00:19:55.344 "product_name": "Logical Volume", 00:19:55.344 "block_size": 4096, 00:19:55.344 "num_blocks": 26476544, 00:19:55.344 "uuid": "e5d74e04-6442-4782-9d73-8ba3cd3297d4", 00:19:55.344 "assigned_rate_limits": { 00:19:55.344 "rw_ios_per_sec": 0, 00:19:55.344 "rw_mbytes_per_sec": 0, 00:19:55.344 "r_mbytes_per_sec": 0, 00:19:55.344 "w_mbytes_per_sec": 0 00:19:55.344 }, 00:19:55.344 "claimed": false, 00:19:55.344 "zoned": false, 00:19:55.344 "supported_io_types": { 00:19:55.344 "read": true, 00:19:55.344 "write": true, 00:19:55.344 "unmap": true, 00:19:55.344 "flush": false, 00:19:55.344 "reset": true, 00:19:55.344 "nvme_admin": false, 00:19:55.344 "nvme_io": false, 00:19:55.344 "nvme_io_md": false, 00:19:55.344 "write_zeroes": true, 00:19:55.344 "zcopy": false, 00:19:55.344 "get_zone_info": false, 00:19:55.344 "zone_management": false, 00:19:55.344 "zone_append": false, 00:19:55.344 "compare": false, 00:19:55.344 "compare_and_write": false, 00:19:55.344 "abort": false, 00:19:55.344 "seek_hole": true, 00:19:55.344 "seek_data": true, 00:19:55.344 "copy": false, 00:19:55.344 "nvme_iov_md": false 00:19:55.344 }, 00:19:55.344 "driver_specific": { 00:19:55.344 "lvol": { 00:19:55.344 "lvol_store_uuid": "288f3427-e058-49a5-b162-289f4c63740f", 00:19:55.344 "base_bdev": "nvme0n1", 00:19:55.344 "thin_provision": true, 00:19:55.344 "num_allocated_clusters": 0, 00:19:55.344 "snapshot": false, 00:19:55.344 "clone": false, 00:19:55.344 "esnap_clone": false 00:19:55.344 } 00:19:55.344 } 00:19:55.344 } 00:19:55.344 ]' 00:19:55.344 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:55.344 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:55.344 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:55.601 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:55.601 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:55.601 04:36:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:55.601 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:55.601 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:55.601 04:36:51 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
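Note: condensing the ftl_fio_basic prologue traced above (plus the nv-cache split that completes just below), these are the bring-up RPCs with the values this run reported; the two UUIDs are the ones printed in the log.
rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe: 1310720 x 4 KiB blocks = 5120 MiB
rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                           # -> 288f3427-e058-49a5-b162-289f4c63740f
rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 288f3427-e058-49a5-b162-289f4c63740f
                                                                      # -> e5d74e04-6442-4782-9d73-8ba3cd3297d4 (103424 MiB thin lvol)
rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # nv-cache NVMe
rpc.py bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB slice -> nvc0n1p0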
00:19:55.859 04:36:52 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:55.859 04:36:52 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:55.859 04:36:52 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size e5d74e04-6442-4782-9d73-8ba3cd3297d4 00:19:55.859 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=e5d74e04-6442-4782-9d73-8ba3cd3297d4 00:19:55.859 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:55.859 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:55.859 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:55.859 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e5d74e04-6442-4782-9d73-8ba3cd3297d4 00:19:55.859 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:55.859 { 00:19:55.859 "name": "e5d74e04-6442-4782-9d73-8ba3cd3297d4", 00:19:55.859 "aliases": [ 00:19:55.859 "lvs/nvme0n1p0" 00:19:55.859 ], 00:19:55.859 "product_name": "Logical Volume", 00:19:55.859 "block_size": 4096, 00:19:55.859 "num_blocks": 26476544, 00:19:55.859 "uuid": "e5d74e04-6442-4782-9d73-8ba3cd3297d4", 00:19:55.859 "assigned_rate_limits": { 00:19:55.859 "rw_ios_per_sec": 0, 00:19:55.859 "rw_mbytes_per_sec": 0, 00:19:55.859 "r_mbytes_per_sec": 0, 00:19:55.859 "w_mbytes_per_sec": 0 00:19:55.859 }, 00:19:55.859 "claimed": false, 00:19:55.859 "zoned": false, 00:19:55.859 "supported_io_types": { 00:19:55.859 "read": true, 00:19:55.859 "write": true, 00:19:55.859 "unmap": true, 00:19:55.859 "flush": false, 00:19:55.859 "reset": true, 00:19:55.859 "nvme_admin": false, 00:19:55.859 "nvme_io": false, 00:19:55.859 "nvme_io_md": false, 00:19:55.859 "write_zeroes": true, 00:19:55.859 "zcopy": false, 00:19:55.859 "get_zone_info": false, 00:19:55.859 "zone_management": false, 00:19:55.859 "zone_append": false, 00:19:55.859 "compare": false, 00:19:55.859 "compare_and_write": false, 00:19:55.859 "abort": false, 00:19:55.859 "seek_hole": true, 00:19:55.859 "seek_data": true, 00:19:55.859 "copy": false, 00:19:55.859 "nvme_iov_md": false 00:19:55.859 }, 00:19:55.859 "driver_specific": { 00:19:55.859 "lvol": { 00:19:55.859 "lvol_store_uuid": "288f3427-e058-49a5-b162-289f4c63740f", 00:19:55.859 "base_bdev": "nvme0n1", 00:19:55.859 "thin_provision": true, 00:19:55.859 "num_allocated_clusters": 0, 00:19:55.859 "snapshot": false, 00:19:55.859 "clone": false, 00:19:55.859 "esnap_clone": false 00:19:55.859 } 00:19:55.859 } 00:19:55.859 } 00:19:55.859 ]' 00:19:55.859 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:55.859 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:55.859 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:56.116 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:56.116 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:56.116 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:56.116 04:36:52 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:56.116 04:36:52 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:56.116 04:36:52 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:56.116 04:36:52 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:19:56.116 04:36:52 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:56.116 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:56.116 04:36:52 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size e5d74e04-6442-4782-9d73-8ba3cd3297d4 00:19:56.116 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=e5d74e04-6442-4782-9d73-8ba3cd3297d4 00:19:56.116 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:56.116 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:56.116 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:56.116 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e5d74e04-6442-4782-9d73-8ba3cd3297d4 00:19:56.373 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:56.373 { 00:19:56.373 "name": "e5d74e04-6442-4782-9d73-8ba3cd3297d4", 00:19:56.373 "aliases": [ 00:19:56.373 "lvs/nvme0n1p0" 00:19:56.373 ], 00:19:56.373 "product_name": "Logical Volume", 00:19:56.373 "block_size": 4096, 00:19:56.373 "num_blocks": 26476544, 00:19:56.373 "uuid": "e5d74e04-6442-4782-9d73-8ba3cd3297d4", 00:19:56.373 "assigned_rate_limits": { 00:19:56.373 "rw_ios_per_sec": 0, 00:19:56.373 "rw_mbytes_per_sec": 0, 00:19:56.373 "r_mbytes_per_sec": 0, 00:19:56.373 "w_mbytes_per_sec": 0 00:19:56.373 }, 00:19:56.373 "claimed": false, 00:19:56.373 "zoned": false, 00:19:56.373 "supported_io_types": { 00:19:56.373 "read": true, 00:19:56.373 "write": true, 00:19:56.373 "unmap": true, 00:19:56.373 "flush": false, 00:19:56.373 "reset": true, 00:19:56.373 "nvme_admin": false, 00:19:56.373 "nvme_io": false, 00:19:56.373 "nvme_io_md": false, 00:19:56.373 "write_zeroes": true, 00:19:56.373 "zcopy": false, 00:19:56.373 "get_zone_info": false, 00:19:56.373 "zone_management": false, 00:19:56.373 "zone_append": false, 00:19:56.373 "compare": false, 00:19:56.373 "compare_and_write": false, 00:19:56.373 "abort": false, 00:19:56.373 "seek_hole": true, 00:19:56.373 "seek_data": true, 00:19:56.373 "copy": false, 00:19:56.373 "nvme_iov_md": false 00:19:56.373 }, 00:19:56.373 "driver_specific": { 00:19:56.373 "lvol": { 00:19:56.373 "lvol_store_uuid": "288f3427-e058-49a5-b162-289f4c63740f", 00:19:56.374 "base_bdev": "nvme0n1", 00:19:56.374 "thin_provision": true, 00:19:56.374 "num_allocated_clusters": 0, 00:19:56.374 "snapshot": false, 00:19:56.374 "clone": false, 00:19:56.374 "esnap_clone": false 00:19:56.374 } 00:19:56.374 } 00:19:56.374 } 00:19:56.374 ]' 00:19:56.374 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:56.374 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:56.374 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:56.632 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:56.632 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:56.632 04:36:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:56.632 04:36:52 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:56.632 04:36:52 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:56.632 04:36:52 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
e5d74e04-6442-4782-9d73-8ba3cd3297d4 -c nvc0n1p0 --l2p_dram_limit 60 00:19:56.632 [2024-11-27 04:36:53.147864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.632 [2024-11-27 04:36:53.147919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:56.632 [2024-11-27 04:36:53.147936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:56.632 [2024-11-27 04:36:53.147944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.632 [2024-11-27 04:36:53.148018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.632 [2024-11-27 04:36:53.148028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:56.632 [2024-11-27 04:36:53.148040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:56.632 [2024-11-27 04:36:53.148047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.632 [2024-11-27 04:36:53.148077] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:56.632 [2024-11-27 04:36:53.148840] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:56.632 [2024-11-27 04:36:53.148873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.632 [2024-11-27 04:36:53.148881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:56.632 [2024-11-27 04:36:53.148891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:19:56.632 [2024-11-27 04:36:53.148899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.632 [2024-11-27 04:36:53.148969] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 50a972e2-e6c8-4826-947b-b963105a6299 00:19:56.632 [2024-11-27 04:36:53.150047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.632 [2024-11-27 04:36:53.150180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:56.632 [2024-11-27 04:36:53.150196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:56.632 [2024-11-27 04:36:53.150206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.632 [2024-11-27 04:36:53.155492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.632 [2024-11-27 04:36:53.155526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:56.632 [2024-11-27 04:36:53.155535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.216 ms 00:19:56.632 [2024-11-27 04:36:53.155549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.632 [2024-11-27 04:36:53.155645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.632 [2024-11-27 04:36:53.155656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:56.632 [2024-11-27 04:36:53.155663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:56.632 [2024-11-27 04:36:53.155676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.632 [2024-11-27 04:36:53.155717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.632 [2024-11-27 04:36:53.155740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:56.632 [2024-11-27 04:36:53.155749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.007 ms 00:19:56.632 [2024-11-27 04:36:53.155758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.632 [2024-11-27 04:36:53.155784] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:56.632 [2024-11-27 04:36:53.159440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.632 [2024-11-27 04:36:53.159468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:56.632 [2024-11-27 04:36:53.159482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.659 ms 00:19:56.632 [2024-11-27 04:36:53.159490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.632 [2024-11-27 04:36:53.159527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.632 [2024-11-27 04:36:53.159535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:56.632 [2024-11-27 04:36:53.159544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:56.632 [2024-11-27 04:36:53.159552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.632 [2024-11-27 04:36:53.159590] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:56.632 [2024-11-27 04:36:53.159742] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:56.632 [2024-11-27 04:36:53.159757] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:56.632 [2024-11-27 04:36:53.159768] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:56.632 [2024-11-27 04:36:53.159779] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:56.632 [2024-11-27 04:36:53.159788] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:56.632 [2024-11-27 04:36:53.159797] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:56.632 [2024-11-27 04:36:53.159805] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:56.632 [2024-11-27 04:36:53.159813] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:56.632 [2024-11-27 04:36:53.159830] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:56.632 [2024-11-27 04:36:53.159842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.632 [2024-11-27 04:36:53.159850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:56.632 [2024-11-27 04:36:53.159859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:19:56.632 [2024-11-27 04:36:53.159867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.632 [2024-11-27 04:36:53.159968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.632 [2024-11-27 04:36:53.159977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:56.632 [2024-11-27 04:36:53.159986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:56.632 [2024-11-27 04:36:53.159993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.632 [2024-11-27 04:36:53.160106] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
00:19:56.632 [2024-11-27 04:36:53.160117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:56.632 [2024-11-27 04:36:53.160127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:56.632 [2024-11-27 04:36:53.160134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.632 [2024-11-27 04:36:53.160143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:56.632 [2024-11-27 04:36:53.160150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:56.632 [2024-11-27 04:36:53.160158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:56.632 [2024-11-27 04:36:53.160165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:56.632 [2024-11-27 04:36:53.160174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:56.632 [2024-11-27 04:36:53.160180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:56.632 [2024-11-27 04:36:53.160188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:56.632 [2024-11-27 04:36:53.160196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:56.632 [2024-11-27 04:36:53.160204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:56.632 [2024-11-27 04:36:53.160213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:56.632 [2024-11-27 04:36:53.160221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:56.632 [2024-11-27 04:36:53.160228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.632 [2024-11-27 04:36:53.160239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:56.632 [2024-11-27 04:36:53.160246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:56.632 [2024-11-27 04:36:53.160254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.632 [2024-11-27 04:36:53.160261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:56.632 [2024-11-27 04:36:53.160269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:56.632 [2024-11-27 04:36:53.160276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.632 [2024-11-27 04:36:53.160284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:56.632 [2024-11-27 04:36:53.160290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:56.632 [2024-11-27 04:36:53.160298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.632 [2024-11-27 04:36:53.160304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:56.632 [2024-11-27 04:36:53.160312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:56.632 [2024-11-27 04:36:53.160319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.632 [2024-11-27 04:36:53.160327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:56.632 [2024-11-27 04:36:53.160333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:56.633 [2024-11-27 04:36:53.160341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.633 [2024-11-27 04:36:53.160347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:56.633 [2024-11-27 04:36:53.160357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:19:56.633 [2024-11-27 04:36:53.160375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:56.633 [2024-11-27 04:36:53.160383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:56.633 [2024-11-27 04:36:53.160389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:56.633 [2024-11-27 04:36:53.160397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:56.633 [2024-11-27 04:36:53.160404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:56.633 [2024-11-27 04:36:53.160412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:56.633 [2024-11-27 04:36:53.160418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.633 [2024-11-27 04:36:53.160427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:56.633 [2024-11-27 04:36:53.160434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:56.633 [2024-11-27 04:36:53.160441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.633 [2024-11-27 04:36:53.160448] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:56.633 [2024-11-27 04:36:53.160456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:56.633 [2024-11-27 04:36:53.160465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:56.633 [2024-11-27 04:36:53.160474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.633 [2024-11-27 04:36:53.160481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:56.633 [2024-11-27 04:36:53.160491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:56.633 [2024-11-27 04:36:53.160497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:56.633 [2024-11-27 04:36:53.160507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:56.633 [2024-11-27 04:36:53.160513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:56.633 [2024-11-27 04:36:53.160521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:56.633 [2024-11-27 04:36:53.160531] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:56.633 [2024-11-27 04:36:53.160542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:56.633 [2024-11-27 04:36:53.160550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:56.633 [2024-11-27 04:36:53.160558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:56.633 [2024-11-27 04:36:53.160565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:56.633 [2024-11-27 04:36:53.160573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:56.633 [2024-11-27 04:36:53.160580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:56.633 [2024-11-27 04:36:53.160588] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:56.633 [2024-11-27 04:36:53.160595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:56.633 [2024-11-27 04:36:53.160603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:56.633 [2024-11-27 04:36:53.160610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:56.633 [2024-11-27 04:36:53.160622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:56.633 [2024-11-27 04:36:53.160629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:56.633 [2024-11-27 04:36:53.160637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:56.633 [2024-11-27 04:36:53.160644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:56.633 [2024-11-27 04:36:53.160653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:56.633 [2024-11-27 04:36:53.160660] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:56.633 [2024-11-27 04:36:53.160671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:56.633 [2024-11-27 04:36:53.160678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:56.633 [2024-11-27 04:36:53.160687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:56.633 [2024-11-27 04:36:53.160694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:56.633 [2024-11-27 04:36:53.160703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:56.633 [2024-11-27 04:36:53.160710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.633 [2024-11-27 04:36:53.160719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:56.633 [2024-11-27 04:36:53.160738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:19:56.633 [2024-11-27 04:36:53.160747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.633 [2024-11-27 04:36:53.160822] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:19:56.633 [2024-11-27 04:36:53.160836] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:59.202 [2024-11-27 04:36:55.727666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.202 [2024-11-27 04:36:55.727915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:59.202 [2024-11-27 04:36:55.727988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2566.836 ms 00:19:59.202 [2024-11-27 04:36:55.728017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.202 [2024-11-27 04:36:55.754235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.202 [2024-11-27 04:36:55.754444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:59.202 [2024-11-27 04:36:55.754508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.963 ms 00:19:59.202 [2024-11-27 04:36:55.754558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.202 [2024-11-27 04:36:55.754739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.202 [2024-11-27 04:36:55.754776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:59.202 [2024-11-27 04:36:55.754835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:59.202 [2024-11-27 04:36:55.754889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.460 [2024-11-27 04:36:55.793524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.460 [2024-11-27 04:36:55.793742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:59.460 [2024-11-27 04:36:55.793814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.544 ms 00:19:59.460 [2024-11-27 04:36:55.793842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.460 [2024-11-27 04:36:55.793907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.460 [2024-11-27 04:36:55.794043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:59.460 [2024-11-27 04:36:55.794102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:59.460 [2024-11-27 04:36:55.794123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.460 [2024-11-27 04:36:55.794525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.460 [2024-11-27 04:36:55.794633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:59.460 [2024-11-27 04:36:55.794686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:19:59.460 [2024-11-27 04:36:55.794755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.460 [2024-11-27 04:36:55.794917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.460 [2024-11-27 04:36:55.794943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:59.460 [2024-11-27 04:36:55.794990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:19:59.460 [2024-11-27 04:36:55.795015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.460 [2024-11-27 04:36:55.809588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.460 [2024-11-27 04:36:55.809764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:59.460 [2024-11-27 
04:36:55.809819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.475 ms 00:19:59.460 [2024-11-27 04:36:55.809844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.460 [2024-11-27 04:36:55.821253] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:59.460 [2024-11-27 04:36:55.835902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.460 [2024-11-27 04:36:55.836036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:59.460 [2024-11-27 04:36:55.836094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.938 ms 00:19:59.460 [2024-11-27 04:36:55.836118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.460 [2024-11-27 04:36:55.889474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.460 [2024-11-27 04:36:55.889649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:59.460 [2024-11-27 04:36:55.889712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.236 ms 00:19:59.460 [2024-11-27 04:36:55.889783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.460 [2024-11-27 04:36:55.889988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.460 [2024-11-27 04:36:55.890022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:59.460 [2024-11-27 04:36:55.890037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:19:59.460 [2024-11-27 04:36:55.890046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.460 [2024-11-27 04:36:55.913509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.460 [2024-11-27 04:36:55.913669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:59.460 [2024-11-27 04:36:55.913733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.400 ms 00:19:59.460 [2024-11-27 04:36:55.913758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.460 [2024-11-27 04:36:55.936231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.460 [2024-11-27 04:36:55.936357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:59.460 [2024-11-27 04:36:55.936427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.389 ms 00:19:59.460 [2024-11-27 04:36:55.936447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.460 [2024-11-27 04:36:55.937051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.460 [2024-11-27 04:36:55.937077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:59.460 [2024-11-27 04:36:55.937088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:19:59.460 [2024-11-27 04:36:55.937096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.460 [2024-11-27 04:36:56.006004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.460 [2024-11-27 04:36:56.006063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:59.460 [2024-11-27 04:36:56.006084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.861 ms 00:19:59.460 [2024-11-27 04:36:56.006092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.460 [2024-11-27 
04:36:56.031350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.460 [2024-11-27 04:36:56.031566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:59.460 [2024-11-27 04:36:56.031590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.153 ms 00:19:59.460 [2024-11-27 04:36:56.031598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.719 [2024-11-27 04:36:56.056496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.719 [2024-11-27 04:36:56.056547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:59.719 [2024-11-27 04:36:56.056561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.845 ms 00:19:59.719 [2024-11-27 04:36:56.056569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.719 [2024-11-27 04:36:56.080713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.719 [2024-11-27 04:36:56.080787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:59.719 [2024-11-27 04:36:56.080802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.081 ms 00:19:59.719 [2024-11-27 04:36:56.080810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.719 [2024-11-27 04:36:56.080866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.719 [2024-11-27 04:36:56.080875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:59.719 [2024-11-27 04:36:56.080891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:59.719 [2024-11-27 04:36:56.080899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.719 [2024-11-27 04:36:56.080998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.719 [2024-11-27 04:36:56.081008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:59.719 [2024-11-27 04:36:56.081018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:59.719 [2024-11-27 04:36:56.081026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.719 [2024-11-27 04:36:56.081978] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2933.657 ms, result 0 00:19:59.719 { 00:19:59.719 "name": "ftl0", 00:19:59.719 "uuid": "50a972e2-e6c8-4826-947b-b963105a6299" 00:19:59.719 } 00:19:59.719 04:36:56 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:59.719 04:36:56 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:59.719 04:36:56 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:59.719 04:36:56 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:19:59.719 04:36:56 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:59.719 04:36:56 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:59.719 04:36:56 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:59.977 04:36:56 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:59.977 [ 00:19:59.977 { 00:19:59.977 "name": "ftl0", 00:19:59.977 "aliases": [ 00:19:59.977 "50a972e2-e6c8-4826-947b-b963105a6299" 00:19:59.977 ], 00:19:59.977 "product_name": "FTL 
disk", 00:19:59.977 "block_size": 4096, 00:19:59.977 "num_blocks": 20971520, 00:19:59.977 "uuid": "50a972e2-e6c8-4826-947b-b963105a6299", 00:19:59.977 "assigned_rate_limits": { 00:19:59.977 "rw_ios_per_sec": 0, 00:19:59.977 "rw_mbytes_per_sec": 0, 00:19:59.977 "r_mbytes_per_sec": 0, 00:19:59.977 "w_mbytes_per_sec": 0 00:19:59.977 }, 00:19:59.977 "claimed": false, 00:19:59.977 "zoned": false, 00:19:59.977 "supported_io_types": { 00:19:59.977 "read": true, 00:19:59.977 "write": true, 00:19:59.977 "unmap": true, 00:19:59.977 "flush": true, 00:19:59.977 "reset": false, 00:19:59.977 "nvme_admin": false, 00:19:59.977 "nvme_io": false, 00:19:59.977 "nvme_io_md": false, 00:19:59.977 "write_zeroes": true, 00:19:59.977 "zcopy": false, 00:19:59.977 "get_zone_info": false, 00:19:59.977 "zone_management": false, 00:19:59.977 "zone_append": false, 00:19:59.977 "compare": false, 00:19:59.977 "compare_and_write": false, 00:19:59.977 "abort": false, 00:19:59.977 "seek_hole": false, 00:19:59.977 "seek_data": false, 00:19:59.977 "copy": false, 00:19:59.977 "nvme_iov_md": false 00:19:59.977 }, 00:19:59.977 "driver_specific": { 00:19:59.977 "ftl": { 00:19:59.977 "base_bdev": "e5d74e04-6442-4782-9d73-8ba3cd3297d4", 00:19:59.977 "cache": "nvc0n1p0" 00:19:59.977 } 00:19:59.977 } 00:19:59.977 } 00:19:59.977 ] 00:19:59.977 04:36:56 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:19:59.977 04:36:56 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:59.977 04:36:56 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:00.235 04:36:56 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:20:00.235 04:36:56 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:00.493 [2024-11-27 04:36:56.970831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.493 [2024-11-27 04:36:56.970884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:00.493 [2024-11-27 04:36:56.970897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:00.493 [2024-11-27 04:36:56.970909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.493 [2024-11-27 04:36:56.970945] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:00.493 [2024-11-27 04:36:56.973656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.493 [2024-11-27 04:36:56.973830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:00.493 [2024-11-27 04:36:56.973852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.689 ms 00:20:00.493 [2024-11-27 04:36:56.973860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.493 [2024-11-27 04:36:56.974280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.493 [2024-11-27 04:36:56.974297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:00.493 [2024-11-27 04:36:56.974308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:20:00.494 [2024-11-27 04:36:56.974316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.494 [2024-11-27 04:36:56.977564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.494 [2024-11-27 04:36:56.977688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:00.494 
[2024-11-27 04:36:56.977707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms 00:20:00.494 [2024-11-27 04:36:56.977716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.494 [2024-11-27 04:36:56.983844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.494 [2024-11-27 04:36:56.983884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:00.494 [2024-11-27 04:36:56.983898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.085 ms 00:20:00.494 [2024-11-27 04:36:56.983906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.494 [2024-11-27 04:36:57.007884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.494 [2024-11-27 04:36:57.007937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:00.494 [2024-11-27 04:36:57.007964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.867 ms 00:20:00.494 [2024-11-27 04:36:57.007972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.494 [2024-11-27 04:36:57.022798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.494 [2024-11-27 04:36:57.022848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:00.494 [2024-11-27 04:36:57.022866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.719 ms 00:20:00.494 [2024-11-27 04:36:57.022874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.494 [2024-11-27 04:36:57.023073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.494 [2024-11-27 04:36:57.023084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:00.494 [2024-11-27 04:36:57.023094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:20:00.494 [2024-11-27 04:36:57.023102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.494 [2024-11-27 04:36:57.046551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.494 [2024-11-27 04:36:57.046601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:00.494 [2024-11-27 04:36:57.046617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.419 ms 00:20:00.494 [2024-11-27 04:36:57.046625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.494 [2024-11-27 04:36:57.069393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.494 [2024-11-27 04:36:57.069439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:00.494 [2024-11-27 04:36:57.069454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.711 ms 00:20:00.494 [2024-11-27 04:36:57.069462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.753 [2024-11-27 04:36:57.092904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.753 [2024-11-27 04:36:57.092967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:00.753 [2024-11-27 04:36:57.092987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.381 ms 00:20:00.753 [2024-11-27 04:36:57.092999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.753 [2024-11-27 04:36:57.117451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.753 [2024-11-27 04:36:57.117509] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:00.753 [2024-11-27 04:36:57.117524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.306 ms 00:20:00.753 [2024-11-27 04:36:57.117532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.753 [2024-11-27 04:36:57.117589] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:00.753 [2024-11-27 04:36:57.117604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:00.753 [2024-11-27 04:36:57.117753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 
[2024-11-27 04:36:57.117805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.117997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:20:00.754 [2024-11-27 04:36:57.118046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:00.754 [2024-11-27 04:36:57.118515] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:00.754 [2024-11-27 04:36:57.118524] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 50a972e2-e6c8-4826-947b-b963105a6299 00:20:00.754 [2024-11-27 04:36:57.118532] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:00.754 [2024-11-27 04:36:57.118542] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:00.754 [2024-11-27 04:36:57.118551] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:00.754 [2024-11-27 04:36:57.118562] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:00.754 [2024-11-27 04:36:57.118568] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:00.754 [2024-11-27 04:36:57.118577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:00.755 [2024-11-27 04:36:57.118584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:00.755 [2024-11-27 04:36:57.118592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:00.755 [2024-11-27 04:36:57.118598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:00.755 [2024-11-27 04:36:57.118607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.755 [2024-11-27 04:36:57.118615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:00.755 [2024-11-27 04:36:57.118625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:20:00.755 [2024-11-27 04:36:57.118632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.755 [2024-11-27 04:36:57.131889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.755 [2024-11-27 04:36:57.131944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:00.755 [2024-11-27 04:36:57.131958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.204 ms 00:20:00.755 [2024-11-27 04:36:57.131966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.755 [2024-11-27 04:36:57.132351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.755 [2024-11-27 04:36:57.132360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:00.755 [2024-11-27 04:36:57.132371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:20:00.755 [2024-11-27 04:36:57.132378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.755 [2024-11-27 04:36:57.176540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.755 [2024-11-27 04:36:57.176595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:00.755 [2024-11-27 04:36:57.176609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.755 [2024-11-27 04:36:57.176617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
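(The Rollback records here are the 'FTL shutdown' management process unwinding startup in reverse: metadata is persisted, the clean state is set, and each component initialized during startup is then rolled back. A minimal sketch of the whole lifecycle this test exercises, using only RPCs that appear in this log, with the device and bdev names taken from this run:

    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e5d74e04-6442-4782-9d73-8ba3cd3297d4 -c nvc0n1p0 --l2p_dram_limit 60
    scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000    # inspect the FTL bdev once startup completes
    scripts/rpc.py bdev_ftl_unload -b ftl0           # triggers the shutdown sequence logged here
)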
00:20:00.755 [2024-11-27 04:36:57.176689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.755 [2024-11-27 04:36:57.176698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.755 [2024-11-27 04:36:57.176707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.755 [2024-11-27 04:36:57.176714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.755 [2024-11-27 04:36:57.176850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.755 [2024-11-27 04:36:57.176864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.755 [2024-11-27 04:36:57.176875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.755 [2024-11-27 04:36:57.176882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.755 [2024-11-27 04:36:57.176909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.755 [2024-11-27 04:36:57.176916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.755 [2024-11-27 04:36:57.176926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.755 [2024-11-27 04:36:57.176932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.755 [2024-11-27 04:36:57.259233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.755 [2024-11-27 04:36:57.259291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:00.755 [2024-11-27 04:36:57.259305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.755 [2024-11-27 04:36:57.259314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.755 [2024-11-27 04:36:57.322229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.755 [2024-11-27 04:36:57.322275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:00.755 [2024-11-27 04:36:57.322290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.755 [2024-11-27 04:36:57.322298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.755 [2024-11-27 04:36:57.322378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.755 [2024-11-27 04:36:57.322387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.755 [2024-11-27 04:36:57.322400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.755 [2024-11-27 04:36:57.322407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.755 [2024-11-27 04:36:57.322479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.755 [2024-11-27 04:36:57.322488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.755 [2024-11-27 04:36:57.322497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.755 [2024-11-27 04:36:57.322505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.755 [2024-11-27 04:36:57.322604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.755 [2024-11-27 04:36:57.322614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.755 [2024-11-27 04:36:57.322626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.755 [2024-11-27 
04:36:57.322633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.755 [2024-11-27 04:36:57.322677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.755 [2024-11-27 04:36:57.322686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:00.755 [2024-11-27 04:36:57.322695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.755 [2024-11-27 04:36:57.322702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.755 [2024-11-27 04:36:57.322758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.755 [2024-11-27 04:36:57.322767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.755 [2024-11-27 04:36:57.322777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.755 [2024-11-27 04:36:57.322786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.755 [2024-11-27 04:36:57.322833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.755 [2024-11-27 04:36:57.322842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.755 [2024-11-27 04:36:57.322852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.755 [2024-11-27 04:36:57.322859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.755 [2024-11-27 04:36:57.323014] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 352.157 ms, result 0 00:20:00.755 true 00:20:01.013 04:36:57 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75160 00:20:01.013 04:36:57 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75160 ']' 00:20:01.013 04:36:57 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75160 00:20:01.013 04:36:57 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:20:01.013 04:36:57 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.013 04:36:57 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75160 00:20:01.013 04:36:57 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:01.013 04:36:57 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:01.013 killing process with pid 75160 00:20:01.013 04:36:57 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75160' 00:20:01.013 04:36:57 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75160 00:20:01.013 04:36:57 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75160 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:13.215 04:37:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:13.215 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:20:13.215 fio-3.35 00:20:13.215 Starting 1 thread 00:20:15.742 00:20:15.742 test: (groupid=0, jobs=1): err= 0: pid=75340: Wed Nov 27 04:37:12 2024 00:20:15.742 read: IOPS=1369, BW=90.9MiB/s (95.4MB/s)(255MiB/2799msec) 00:20:15.743 slat (nsec): min=2972, max=21486, avg=3939.59, stdev=1869.03 00:20:15.743 clat (usec): min=235, max=1403, avg=331.19, stdev=42.21 00:20:15.743 lat (usec): min=239, max=1407, avg=335.13, stdev=42.98 00:20:15.743 clat percentiles (usec): 00:20:15.743 | 1.00th=[ 265], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 00:20:15.743 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 326], 00:20:15.743 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 412], 00:20:15.743 | 99.00th=[ 461], 99.50th=[ 519], 99.90th=[ 775], 99.95th=[ 988], 00:20:15.743 | 99.99th=[ 1401] 00:20:15.743 write: IOPS=1379, BW=91.6MiB/s (96.0MB/s)(256MiB/2796msec); 0 zone resets 00:20:15.743 slat (nsec): min=13917, max=54139, avg=16903.08, stdev=3172.63 00:20:15.743 clat (usec): min=288, max=1942, avg=363.99, stdev=63.17 00:20:15.743 lat (usec): min=305, max=1958, avg=380.89, stdev=63.57 00:20:15.743 clat percentiles (usec): 00:20:15.743 | 1.00th=[ 318], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 338], 00:20:15.743 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 351], 60.00th=[ 355], 00:20:15.743 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 412], 95.00th=[ 433], 00:20:15.743 | 99.00th=[ 652], 99.50th=[ 701], 99.90th=[ 1156], 99.95th=[ 1336], 00:20:15.743 | 99.99th=[ 1942] 00:20:15.743 bw ( KiB/s): min=91481, max=99416, per=99.79%, avg=93585.80, stdev=3293.20, samples=5 00:20:15.743 iops : min= 1345, max= 1462, avg=1376.20, stdev=48.48, samples=5 00:20:15.743 lat (usec) : 250=0.04%, 500=98.26%, 750=1.52%, 
1000=0.12% 00:20:15.743 lat (msec) : 2=0.07% 00:20:15.743 cpu : usr=99.36%, sys=0.00%, ctx=4, majf=0, minf=1169 00:20:15.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:15.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.743 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:15.743 00:20:15.743 Run status group 0 (all jobs): 00:20:15.743 READ: bw=90.9MiB/s (95.4MB/s), 90.9MiB/s-90.9MiB/s (95.4MB/s-95.4MB/s), io=255MiB (267MB), run=2799-2799msec 00:20:15.743 WRITE: bw=91.6MiB/s (96.0MB/s), 91.6MiB/s-91.6MiB/s (96.0MB/s-96.0MB/s), io=256MiB (269MB), run=2796-2796msec 00:20:17.641 ----------------------------------------------------- 00:20:17.641 Suppressions used: 00:20:17.641 count bytes template 00:20:17.641 1 5 /usr/src/fio/parse.c 00:20:17.641 1 8 libtcmalloc_minimal.so 00:20:17.641 1 904 libcrypto.so 00:20:17.641 ----------------------------------------------------- 00:20:17.641 00:20:17.641 04:37:13 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:20:17.641 04:37:13 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:17.641 04:37:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:17.641 04:37:13 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:17.641 04:37:13 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:20:17.641 04:37:13 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:17.641 04:37:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:17.641 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:17.641 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:17.641 fio-3.35 00:20:17.641 Starting 2 threads 00:20:44.198 00:20:44.198 first_half: (groupid=0, jobs=1): err= 0: pid=75432: Wed Nov 27 04:37:38 2024 00:20:44.198 read: IOPS=2865, BW=11.2MiB/s (11.7MB/s)(255MiB/22766msec) 00:20:44.198 slat (nsec): min=3067, max=121743, avg=4043.00, stdev=984.92 00:20:44.198 clat (usec): min=595, max=335483, avg=33242.77, stdev=20313.01 00:20:44.198 lat (usec): min=600, max=335487, avg=33246.82, stdev=20313.07 00:20:44.198 clat percentiles (msec): 00:20:44.198 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 28], 00:20:44.198 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:20:44.198 | 70.00th=[ 31], 80.00th=[ 33], 90.00th=[ 36], 95.00th=[ 43], 00:20:44.198 | 99.00th=[ 150], 99.50th=[ 174], 99.90th=[ 228], 99.95th=[ 264], 00:20:44.198 | 99.99th=[ 334] 00:20:44.198 write: IOPS=3376, BW=13.2MiB/s (13.8MB/s)(256MiB/19408msec); 0 zone resets 00:20:44.198 slat (usec): min=3, max=337, avg= 6.34, stdev= 3.48 00:20:44.198 clat (usec): min=383, max=157136, avg=11345.07, stdev=19179.64 00:20:44.198 lat (usec): min=394, max=157142, avg=11351.41, stdev=19179.88 00:20:44.198 clat percentiles (usec): 00:20:44.198 | 1.00th=[ 676], 5.00th=[ 873], 10.00th=[ 1037], 20.00th=[ 1369], 00:20:44.198 | 30.00th=[ 2769], 40.00th=[ 4113], 50.00th=[ 5080], 60.00th=[ 5800], 00:20:44.198 | 70.00th=[ 6980], 80.00th=[ 11207], 90.00th=[ 34341], 95.00th=[ 64750], 00:20:44.198 | 99.00th=[ 77071], 99.50th=[ 80217], 99.90th=[145753], 99.95th=[152044], 00:20:44.198 | 99.99th=[156238] 00:20:44.198 bw ( KiB/s): min= 944, max=39976, per=84.38%, avg=22795.13, stdev=12239.59, samples=23 00:20:44.198 iops : min= 236, max= 9994, avg=5698.78, stdev=3059.90, samples=23 00:20:44.198 lat (usec) : 500=0.02%, 750=1.14%, 1000=3.24% 00:20:44.198 lat (msec) : 2=8.80%, 4=6.75%, 10=20.07%, 20=6.17%, 50=47.24% 00:20:44.198 lat (msec) : 100=5.32%, 250=1.21%, 500=0.03% 00:20:44.198 cpu : usr=99.32%, sys=0.08%, ctx=33, majf=0, minf=5575 00:20:44.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:44.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.198 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:44.198 issued rwts: total=65239,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.198 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:44.198 second_half: (groupid=0, jobs=1): err= 0: pid=75433: Wed Nov 27 04:37:38 2024 00:20:44.198 read: IOPS=2878, BW=11.2MiB/s (11.8MB/s)(255MiB/22645msec) 00:20:44.198 slat (nsec): min=3065, max=50606, avg=3885.72, stdev=1009.05 00:20:44.198 clat (usec): min=649, max=313920, avg=33793.68, stdev=19271.10 00:20:44.198 lat (usec): min=654, max=313924, avg=33797.56, stdev=19271.11 00:20:44.198 clat percentiles (msec): 00:20:44.198 | 1.00th=[ 6], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 29], 00:20:44.198 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:20:44.198 | 70.00th=[ 31], 80.00th=[ 34], 
90.00th=[ 36], 95.00th=[ 47], 00:20:44.198 | 99.00th=[ 144], 99.50th=[ 161], 99.90th=[ 203], 99.95th=[ 247], 00:20:44.198 | 99.99th=[ 292] 00:20:44.198 write: IOPS=3690, BW=14.4MiB/s (15.1MB/s)(256MiB/17758msec); 0 zone resets 00:20:44.198 slat (usec): min=3, max=141, avg= 6.31, stdev= 3.22 00:20:44.198 clat (usec): min=327, max=88466, avg=10600.62, stdev=18087.24 00:20:44.198 lat (usec): min=336, max=88471, avg=10606.93, stdev=18087.34 00:20:44.198 clat percentiles (usec): 00:20:44.198 | 1.00th=[ 693], 5.00th=[ 881], 10.00th=[ 1029], 20.00th=[ 1287], 00:20:44.198 | 30.00th=[ 1811], 40.00th=[ 3261], 50.00th=[ 4555], 60.00th=[ 5735], 00:20:44.198 | 70.00th=[ 7570], 80.00th=[10945], 90.00th=[23462], 95.00th=[64750], 00:20:44.198 | 99.00th=[76022], 99.50th=[78119], 99.90th=[83362], 99.95th=[85459], 00:20:44.198 | 99.99th=[87557] 00:20:44.198 bw ( KiB/s): min= 624, max=41856, per=88.23%, avg=23834.77, stdev=12135.58, samples=22 00:20:44.198 iops : min= 156, max=10464, avg=5958.68, stdev=3033.88, samples=22 00:20:44.198 lat (usec) : 500=0.03%, 750=0.92%, 1000=3.56% 00:20:44.198 lat (msec) : 2=11.52%, 4=7.31%, 10=15.82%, 20=6.86%, 50=47.42% 00:20:44.198 lat (msec) : 100=5.40%, 250=1.13%, 500=0.02% 00:20:44.198 cpu : usr=99.36%, sys=0.13%, ctx=30, majf=0, minf=5540 00:20:44.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:44.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.198 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:44.198 issued rwts: total=65182,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.198 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:44.198 00:20:44.198 Run status group 0 (all jobs): 00:20:44.198 READ: bw=22.4MiB/s (23.5MB/s), 11.2MiB/s-11.2MiB/s (11.7MB/s-11.8MB/s), io=509MiB (534MB), run=22645-22766msec 00:20:44.198 WRITE: bw=26.4MiB/s (27.7MB/s), 13.2MiB/s-14.4MiB/s (13.8MB/s-15.1MB/s), io=512MiB (537MB), run=17758-19408msec 00:20:44.461 ----------------------------------------------------- 00:20:44.461 Suppressions used: 00:20:44.461 count bytes template 00:20:44.461 2 10 /usr/src/fio/parse.c 00:20:44.461 2 192 /usr/src/fio/iolog.c 00:20:44.461 1 8 libtcmalloc_minimal.so 00:20:44.461 1 904 libcrypto.so 00:20:44.461 ----------------------------------------------------- 00:20:44.461 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:44.461 04:37:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:44.722 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:44.722 fio-3.35 00:20:44.722 Starting 1 thread 00:20:59.634 00:20:59.634 test: (groupid=0, jobs=1): err= 0: pid=75735: Wed Nov 27 04:37:54 2024 00:20:59.634 read: IOPS=8028, BW=31.4MiB/s (32.9MB/s)(255MiB/8121msec) 00:20:59.634 slat (nsec): min=3108, max=22173, avg=3612.21, stdev=663.28 00:20:59.634 clat (usec): min=514, max=36240, avg=15934.17, stdev=1576.35 00:20:59.634 lat (usec): min=518, max=36243, avg=15937.78, stdev=1576.36 00:20:59.634 clat percentiles (usec): 00:20:59.634 | 1.00th=[14615], 5.00th=[14746], 10.00th=[14877], 20.00th=[15139], 00:20:59.634 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15533], 60.00th=[15664], 00:20:59.634 | 70.00th=[15926], 80.00th=[16057], 90.00th=[17695], 95.00th=[19268], 00:20:59.634 | 99.00th=[22152], 99.50th=[23200], 99.90th=[28443], 99.95th=[32375], 00:20:59.634 | 99.99th=[35390] 00:20:59.634 write: IOPS=14.7k, BW=57.3MiB/s (60.1MB/s)(256MiB/4467msec); 0 zone resets 00:20:59.634 slat (usec): min=3, max=968, avg= 5.84, stdev= 5.04 00:20:59.634 clat (usec): min=531, max=50676, avg=8678.40, stdev=11321.42 00:20:59.634 lat (usec): min=536, max=50681, avg=8684.25, stdev=11321.42 00:20:59.634 clat percentiles (usec): 00:20:59.634 | 1.00th=[ 701], 5.00th=[ 857], 10.00th=[ 963], 20.00th=[ 1123], 00:20:59.634 | 30.00th=[ 1319], 40.00th=[ 2147], 50.00th=[ 4948], 60.00th=[ 5997], 00:20:59.634 | 70.00th=[ 7242], 80.00th=[ 8848], 90.00th=[32113], 95.00th=[34341], 00:20:59.634 | 99.00th=[40633], 99.50th=[42206], 99.90th=[45351], 99.95th=[46400], 00:20:59.634 | 99.99th=[49021] 00:20:59.634 bw ( KiB/s): min=45928, max=87528, per=99.27%, avg=58254.22, stdev=14694.56, samples=9 00:20:59.634 iops : min=11482, max=21882, avg=14563.56, stdev=3673.64, samples=9 00:20:59.634 lat (usec) : 750=1.00%, 1000=5.15% 00:20:59.634 lat (msec) : 2=13.64%, 4=1.56%, 10=19.89%, 20=49.28%, 50=9.49% 00:20:59.634 lat (msec) : 100=0.01% 00:20:59.634 cpu : usr=99.09%, 
sys=0.19%, ctx=26, majf=0, minf=5565 00:20:59.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:59.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.634 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:59.634 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.634 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:59.634 00:20:59.634 Run status group 0 (all jobs): 00:20:59.634 READ: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=255MiB (267MB), run=8121-8121msec 00:20:59.634 WRITE: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=256MiB (268MB), run=4467-4467msec 00:21:00.204 ----------------------------------------------------- 00:21:00.204 Suppressions used: 00:21:00.204 count bytes template 00:21:00.204 1 5 /usr/src/fio/parse.c 00:21:00.204 2 192 /usr/src/fio/iolog.c 00:21:00.204 1 8 libtcmalloc_minimal.so 00:21:00.204 1 904 libcrypto.so 00:21:00.204 ----------------------------------------------------- 00:21:00.204 00:21:00.204 04:37:56 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:21:00.204 04:37:56 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:00.204 04:37:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:00.204 04:37:56 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:00.204 04:37:56 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:21:00.204 Remove shared memory files 00:21:00.204 04:37:56 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:00.204 04:37:56 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:21:00.204 04:37:56 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:21:00.204 04:37:56 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57159 /dev/shm/spdk_tgt_trace.pid74081 00:21:00.205 04:37:56 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:00.205 04:37:56 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:21:00.205 00:21:00.205 real 1m7.102s 00:21:00.205 user 2m30.675s 00:21:00.205 sys 0m2.601s 00:21:00.205 04:37:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.205 04:37:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:00.205 ************************************ 00:21:00.205 END TEST ftl_fio_basic 00:21:00.205 ************************************ 00:21:00.205 04:37:56 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:00.205 04:37:56 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:00.205 04:37:56 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.205 04:37:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:00.205 ************************************ 00:21:00.205 START TEST ftl_bdevperf 00:21:00.205 ************************************ 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:00.205 * Looking for test storage... 
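The fio runs above all go through the same wrapper logic visible in the xtrace: ldd inspects the spdk_bdev fio plugin for the sanitizer runtime it was linked against (libasan or libclang_rt.asan), and that library is LD_PRELOADed ahead of the plugin so that fio, which is not itself built with ASAN, can load the instrumented ioengine. A minimal sketch of that pattern, with the paths as used in this run (the function name is illustrative):

    # Run fio with the SPDK bdev ioengine under ASAN: preload the sanitizer
    # runtime the plugin links against, then the plugin itself.
    fio_with_asan() {
        local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
        local job=$1 sanitizer asan_lib

        for sanitizer in libasan libclang_rt.asan; do
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n "$asan_lib" ]] && break
        done

        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job"
    }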
00:21:00.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:00.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.205 --rc genhtml_branch_coverage=1 00:21:00.205 --rc genhtml_function_coverage=1 00:21:00.205 --rc genhtml_legend=1 00:21:00.205 --rc geninfo_all_blocks=1 00:21:00.205 --rc geninfo_unexecuted_blocks=1 00:21:00.205 00:21:00.205 ' 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:00.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.205 --rc genhtml_branch_coverage=1 00:21:00.205 
--rc genhtml_function_coverage=1 00:21:00.205 --rc genhtml_legend=1 00:21:00.205 --rc geninfo_all_blocks=1 00:21:00.205 --rc geninfo_unexecuted_blocks=1 00:21:00.205 00:21:00.205 ' 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:00.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.205 --rc genhtml_branch_coverage=1 00:21:00.205 --rc genhtml_function_coverage=1 00:21:00.205 --rc genhtml_legend=1 00:21:00.205 --rc geninfo_all_blocks=1 00:21:00.205 --rc geninfo_unexecuted_blocks=1 00:21:00.205 00:21:00.205 ' 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:00.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.205 --rc genhtml_branch_coverage=1 00:21:00.205 --rc genhtml_function_coverage=1 00:21:00.205 --rc genhtml_legend=1 00:21:00.205 --rc geninfo_all_blocks=1 00:21:00.205 --rc geninfo_unexecuted_blocks=1 00:21:00.205 00:21:00.205 ' 00:21:00.205 04:37:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75962 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75962 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75962 ']' 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.465 04:37:56 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:00.465 [2024-11-27 04:37:56.868040] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
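bdevperf is launched here with -z, which holds off I/O until a perform_tests RPC arrives, and -T ftl0, pointing it at the bdev under test; the FTL stack it exercises is then assembled over RPC, as traced below. A condensed sketch of that sequence, using the rpc.py calls visible in this log (capturing UUIDs from the RPC replies with tr is illustrative, and the final bdevperf.py perform_tests step is an assumption about how the run is kicked off once setup completes; it is not traced here):

    spdk=/home/vagrant/spdk_repo/spdk
    rpc="$spdk/scripts/rpc.py"

    # Start bdevperf idle; the script waits for its RPC socket before
    # issuing the configuration calls below.
    "$spdk/build/examples/bdevperf" -z -T ftl0 &

    # Base device: thin lvol on the primary NVMe namespace (size in MiB).
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs | tr -d '"')
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs" | tr -d '"')

    # NV cache: a 5171 MiB slice of the second NVMe namespace.
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1

    # FTL on top of both; creation can be slow, hence the 240 s RPC timeout.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 20

    # Assumed helper for starting the configured workloads.
    "$spdk/examples/bdev/bdevperf/bdevperf.py" perform_tests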
00:21:00.465 [2024-11-27 04:37:56.868173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75962 ] 00:21:00.465 [2024-11-27 04:37:57.026267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.727 [2024-11-27 04:37:57.190668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.299 04:37:57 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.299 04:37:57 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:21:01.299 04:37:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:01.299 04:37:57 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:21:01.299 04:37:57 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:01.299 04:37:57 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:21:01.299 04:37:57 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:21:01.299 04:37:57 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:01.559 04:37:58 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:01.559 04:37:58 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:21:01.559 04:37:58 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:01.559 04:37:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:01.559 04:37:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:01.559 04:37:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:01.559 04:37:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:01.559 04:37:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:01.820 04:37:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:01.820 { 00:21:01.820 "name": "nvme0n1", 00:21:01.820 "aliases": [ 00:21:01.820 "a59c80da-5159-4095-94d5-cd2d6b44d1c2" 00:21:01.820 ], 00:21:01.820 "product_name": "NVMe disk", 00:21:01.820 "block_size": 4096, 00:21:01.820 "num_blocks": 1310720, 00:21:01.820 "uuid": "a59c80da-5159-4095-94d5-cd2d6b44d1c2", 00:21:01.820 "numa_id": -1, 00:21:01.820 "assigned_rate_limits": { 00:21:01.820 "rw_ios_per_sec": 0, 00:21:01.820 "rw_mbytes_per_sec": 0, 00:21:01.820 "r_mbytes_per_sec": 0, 00:21:01.820 "w_mbytes_per_sec": 0 00:21:01.820 }, 00:21:01.820 "claimed": true, 00:21:01.820 "claim_type": "read_many_write_one", 00:21:01.820 "zoned": false, 00:21:01.820 "supported_io_types": { 00:21:01.820 "read": true, 00:21:01.820 "write": true, 00:21:01.820 "unmap": true, 00:21:01.820 "flush": true, 00:21:01.820 "reset": true, 00:21:01.820 "nvme_admin": true, 00:21:01.820 "nvme_io": true, 00:21:01.820 "nvme_io_md": false, 00:21:01.820 "write_zeroes": true, 00:21:01.820 "zcopy": false, 00:21:01.820 "get_zone_info": false, 00:21:01.820 "zone_management": false, 00:21:01.820 "zone_append": false, 00:21:01.820 "compare": true, 00:21:01.820 "compare_and_write": false, 00:21:01.820 "abort": true, 00:21:01.820 "seek_hole": false, 00:21:01.820 "seek_data": false, 00:21:01.820 "copy": true, 00:21:01.820 "nvme_iov_md": false 00:21:01.820 }, 00:21:01.820 "driver_specific": { 00:21:01.820 
"nvme": [ 00:21:01.820 { 00:21:01.820 "pci_address": "0000:00:11.0", 00:21:01.820 "trid": { 00:21:01.820 "trtype": "PCIe", 00:21:01.820 "traddr": "0000:00:11.0" 00:21:01.820 }, 00:21:01.820 "ctrlr_data": { 00:21:01.820 "cntlid": 0, 00:21:01.820 "vendor_id": "0x1b36", 00:21:01.820 "model_number": "QEMU NVMe Ctrl", 00:21:01.820 "serial_number": "12341", 00:21:01.820 "firmware_revision": "8.0.0", 00:21:01.820 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:01.820 "oacs": { 00:21:01.820 "security": 0, 00:21:01.820 "format": 1, 00:21:01.820 "firmware": 0, 00:21:01.820 "ns_manage": 1 00:21:01.820 }, 00:21:01.820 "multi_ctrlr": false, 00:21:01.820 "ana_reporting": false 00:21:01.820 }, 00:21:01.820 "vs": { 00:21:01.820 "nvme_version": "1.4" 00:21:01.820 }, 00:21:01.820 "ns_data": { 00:21:01.820 "id": 1, 00:21:01.820 "can_share": false 00:21:01.820 } 00:21:01.820 } 00:21:01.820 ], 00:21:01.820 "mp_policy": "active_passive" 00:21:01.820 } 00:21:01.820 } 00:21:01.820 ]' 00:21:01.820 04:37:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:01.820 04:37:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:01.820 04:37:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:02.081 04:37:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:02.081 04:37:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:02.081 04:37:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:21:02.081 04:37:58 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:21:02.081 04:37:58 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:02.081 04:37:58 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:21:02.081 04:37:58 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:02.081 04:37:58 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:02.081 04:37:58 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=288f3427-e058-49a5-b162-289f4c63740f 00:21:02.081 04:37:58 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:21:02.081 04:37:58 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 288f3427-e058-49a5-b162-289f4c63740f 00:21:02.342 04:37:58 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:02.604 04:37:59 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=8c74b026-1dcc-4dde-a246-309ad2ae0ee9 00:21:02.604 04:37:59 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8c74b026-1dcc-4dde-a246-309ad2ae0ee9 00:21:02.863 04:37:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=31f3c73e-d98d-45f1-8760-a8d71c1fe440 00:21:02.863 04:37:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 31f3c73e-d98d-45f1-8760-a8d71c1fe440 00:21:02.863 04:37:59 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:21:02.863 04:37:59 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:02.863 04:37:59 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=31f3c73e-d98d-45f1-8760-a8d71c1fe440 00:21:02.863 04:37:59 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:21:02.863 04:37:59 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 31f3c73e-d98d-45f1-8760-a8d71c1fe440 00:21:02.863 04:37:59 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=31f3c73e-d98d-45f1-8760-a8d71c1fe440 00:21:02.863 04:37:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:02.863 04:37:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:02.863 04:37:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:02.863 04:37:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 31f3c73e-d98d-45f1-8760-a8d71c1fe440 00:21:03.125 04:37:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:03.125 { 00:21:03.125 "name": "31f3c73e-d98d-45f1-8760-a8d71c1fe440", 00:21:03.125 "aliases": [ 00:21:03.125 "lvs/nvme0n1p0" 00:21:03.125 ], 00:21:03.125 "product_name": "Logical Volume", 00:21:03.125 "block_size": 4096, 00:21:03.125 "num_blocks": 26476544, 00:21:03.125 "uuid": "31f3c73e-d98d-45f1-8760-a8d71c1fe440", 00:21:03.125 "assigned_rate_limits": { 00:21:03.125 "rw_ios_per_sec": 0, 00:21:03.125 "rw_mbytes_per_sec": 0, 00:21:03.125 "r_mbytes_per_sec": 0, 00:21:03.125 "w_mbytes_per_sec": 0 00:21:03.125 }, 00:21:03.125 "claimed": false, 00:21:03.125 "zoned": false, 00:21:03.125 "supported_io_types": { 00:21:03.125 "read": true, 00:21:03.125 "write": true, 00:21:03.125 "unmap": true, 00:21:03.125 "flush": false, 00:21:03.125 "reset": true, 00:21:03.125 "nvme_admin": false, 00:21:03.125 "nvme_io": false, 00:21:03.125 "nvme_io_md": false, 00:21:03.125 "write_zeroes": true, 00:21:03.125 "zcopy": false, 00:21:03.125 "get_zone_info": false, 00:21:03.125 "zone_management": false, 00:21:03.125 "zone_append": false, 00:21:03.125 "compare": false, 00:21:03.125 "compare_and_write": false, 00:21:03.125 "abort": false, 00:21:03.125 "seek_hole": true, 00:21:03.125 "seek_data": true, 00:21:03.125 "copy": false, 00:21:03.125 "nvme_iov_md": false 00:21:03.125 }, 00:21:03.125 "driver_specific": { 00:21:03.125 "lvol": { 00:21:03.125 "lvol_store_uuid": "8c74b026-1dcc-4dde-a246-309ad2ae0ee9", 00:21:03.125 "base_bdev": "nvme0n1", 00:21:03.125 "thin_provision": true, 00:21:03.125 "num_allocated_clusters": 0, 00:21:03.125 "snapshot": false, 00:21:03.125 "clone": false, 00:21:03.125 "esnap_clone": false 00:21:03.125 } 00:21:03.125 } 00:21:03.125 } 00:21:03.125 ]' 00:21:03.125 04:37:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:03.125 04:37:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:03.125 04:37:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:03.125 04:37:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:03.125 04:37:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:03.125 04:37:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:03.125 04:37:59 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:21:03.125 04:37:59 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:21:03.125 04:37:59 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:03.386 04:37:59 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:03.386 04:37:59 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:03.386 04:37:59 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 31f3c73e-d98d-45f1-8760-a8d71c1fe440 00:21:03.386 04:37:59 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=31f3c73e-d98d-45f1-8760-a8d71c1fe440 00:21:03.386 04:37:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:03.386 04:37:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:03.386 04:37:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:03.386 04:37:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 31f3c73e-d98d-45f1-8760-a8d71c1fe440 00:21:03.647 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:03.647 { 00:21:03.647 "name": "31f3c73e-d98d-45f1-8760-a8d71c1fe440", 00:21:03.647 "aliases": [ 00:21:03.647 "lvs/nvme0n1p0" 00:21:03.647 ], 00:21:03.647 "product_name": "Logical Volume", 00:21:03.647 "block_size": 4096, 00:21:03.647 "num_blocks": 26476544, 00:21:03.647 "uuid": "31f3c73e-d98d-45f1-8760-a8d71c1fe440", 00:21:03.647 "assigned_rate_limits": { 00:21:03.647 "rw_ios_per_sec": 0, 00:21:03.647 "rw_mbytes_per_sec": 0, 00:21:03.647 "r_mbytes_per_sec": 0, 00:21:03.647 "w_mbytes_per_sec": 0 00:21:03.647 }, 00:21:03.648 "claimed": false, 00:21:03.648 "zoned": false, 00:21:03.648 "supported_io_types": { 00:21:03.648 "read": true, 00:21:03.648 "write": true, 00:21:03.648 "unmap": true, 00:21:03.648 "flush": false, 00:21:03.648 "reset": true, 00:21:03.648 "nvme_admin": false, 00:21:03.648 "nvme_io": false, 00:21:03.648 "nvme_io_md": false, 00:21:03.648 "write_zeroes": true, 00:21:03.648 "zcopy": false, 00:21:03.648 "get_zone_info": false, 00:21:03.648 "zone_management": false, 00:21:03.648 "zone_append": false, 00:21:03.648 "compare": false, 00:21:03.648 "compare_and_write": false, 00:21:03.648 "abort": false, 00:21:03.648 "seek_hole": true, 00:21:03.648 "seek_data": true, 00:21:03.648 "copy": false, 00:21:03.648 "nvme_iov_md": false 00:21:03.648 }, 00:21:03.648 "driver_specific": { 00:21:03.648 "lvol": { 00:21:03.648 "lvol_store_uuid": "8c74b026-1dcc-4dde-a246-309ad2ae0ee9", 00:21:03.648 "base_bdev": "nvme0n1", 00:21:03.648 "thin_provision": true, 00:21:03.648 "num_allocated_clusters": 0, 00:21:03.648 "snapshot": false, 00:21:03.648 "clone": false, 00:21:03.648 "esnap_clone": false 00:21:03.648 } 00:21:03.648 } 00:21:03.648 } 00:21:03.648 ]' 00:21:03.648 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:03.648 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:03.648 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:03.648 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:03.648 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:03.648 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:03.648 04:38:00 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:21:03.648 04:38:00 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:03.909 04:38:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:21:03.909 04:38:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 31f3c73e-d98d-45f1-8760-a8d71c1fe440 00:21:03.909 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=31f3c73e-d98d-45f1-8760-a8d71c1fe440 00:21:03.909 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:03.909 04:38:00 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:21:03.909 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:03.909 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 31f3c73e-d98d-45f1-8760-a8d71c1fe440 00:21:03.909 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:03.909 { 00:21:03.909 "name": "31f3c73e-d98d-45f1-8760-a8d71c1fe440", 00:21:03.909 "aliases": [ 00:21:03.909 "lvs/nvme0n1p0" 00:21:03.909 ], 00:21:03.909 "product_name": "Logical Volume", 00:21:03.909 "block_size": 4096, 00:21:03.909 "num_blocks": 26476544, 00:21:03.909 "uuid": "31f3c73e-d98d-45f1-8760-a8d71c1fe440", 00:21:03.909 "assigned_rate_limits": { 00:21:03.909 "rw_ios_per_sec": 0, 00:21:03.909 "rw_mbytes_per_sec": 0, 00:21:03.909 "r_mbytes_per_sec": 0, 00:21:03.909 "w_mbytes_per_sec": 0 00:21:03.909 }, 00:21:03.909 "claimed": false, 00:21:03.909 "zoned": false, 00:21:03.909 "supported_io_types": { 00:21:03.909 "read": true, 00:21:03.909 "write": true, 00:21:03.909 "unmap": true, 00:21:03.909 "flush": false, 00:21:03.909 "reset": true, 00:21:03.909 "nvme_admin": false, 00:21:03.909 "nvme_io": false, 00:21:03.909 "nvme_io_md": false, 00:21:03.909 "write_zeroes": true, 00:21:03.909 "zcopy": false, 00:21:03.909 "get_zone_info": false, 00:21:03.909 "zone_management": false, 00:21:03.909 "zone_append": false, 00:21:03.909 "compare": false, 00:21:03.909 "compare_and_write": false, 00:21:03.909 "abort": false, 00:21:03.909 "seek_hole": true, 00:21:03.909 "seek_data": true, 00:21:03.909 "copy": false, 00:21:03.909 "nvme_iov_md": false 00:21:03.909 }, 00:21:03.909 "driver_specific": { 00:21:03.909 "lvol": { 00:21:03.909 "lvol_store_uuid": "8c74b026-1dcc-4dde-a246-309ad2ae0ee9", 00:21:03.909 "base_bdev": "nvme0n1", 00:21:03.909 "thin_provision": true, 00:21:03.909 "num_allocated_clusters": 0, 00:21:03.909 "snapshot": false, 00:21:03.909 "clone": false, 00:21:03.909 "esnap_clone": false 00:21:03.909 } 00:21:03.909 } 00:21:03.909 } 00:21:03.909 ]' 00:21:03.909 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:04.171 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:04.171 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:04.171 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:04.171 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:04.171 04:38:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:04.171 04:38:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:21:04.171 04:38:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 31f3c73e-d98d-45f1-8760-a8d71c1fe440 -c nvc0n1p0 --l2p_dram_limit 20 00:21:04.171 [2024-11-27 04:38:00.727675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.171 [2024-11-27 04:38:00.727748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:04.171 [2024-11-27 04:38:00.727762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:04.172 [2024-11-27 04:38:00.727772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.172 [2024-11-27 04:38:00.727824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.172 [2024-11-27 04:38:00.727835] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:04.172 [2024-11-27 04:38:00.727843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:04.172 [2024-11-27 04:38:00.727853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.172 [2024-11-27 04:38:00.727870] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:04.172 [2024-11-27 04:38:00.728552] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:04.172 [2024-11-27 04:38:00.728578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.172 [2024-11-27 04:38:00.728587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:04.172 [2024-11-27 04:38:00.728596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:21:04.172 [2024-11-27 04:38:00.728605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.172 [2024-11-27 04:38:00.728741] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 22594052-4043-486c-be41-3d6626378139 00:21:04.172 [2024-11-27 04:38:00.730224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.172 [2024-11-27 04:38:00.730256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:04.172 [2024-11-27 04:38:00.730272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:04.172 [2024-11-27 04:38:00.730280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.172 [2024-11-27 04:38:00.735571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.172 [2024-11-27 04:38:00.735602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:04.172 [2024-11-27 04:38:00.735614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.255 ms 00:21:04.172 [2024-11-27 04:38:00.735624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.172 [2024-11-27 04:38:00.735707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.172 [2024-11-27 04:38:00.735721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:04.172 [2024-11-27 04:38:00.735750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:04.172 [2024-11-27 04:38:00.735757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.172 [2024-11-27 04:38:00.735808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.172 [2024-11-27 04:38:00.735817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:04.172 [2024-11-27 04:38:00.735831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:04.172 [2024-11-27 04:38:00.735839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.172 [2024-11-27 04:38:00.735862] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:04.172 [2024-11-27 04:38:00.739396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.172 [2024-11-27 04:38:00.739426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:04.172 [2024-11-27 04:38:00.739435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.542 ms 00:21:04.172 [2024-11-27 04:38:00.739448] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.172 [2024-11-27 04:38:00.739475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.172 [2024-11-27 04:38:00.739485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:04.172 [2024-11-27 04:38:00.739493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:04.172 [2024-11-27 04:38:00.739502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.172 [2024-11-27 04:38:00.739517] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:04.172 [2024-11-27 04:38:00.739656] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:04.172 [2024-11-27 04:38:00.739668] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:04.172 [2024-11-27 04:38:00.739679] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:04.172 [2024-11-27 04:38:00.739689] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:04.172 [2024-11-27 04:38:00.739700] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:04.172 [2024-11-27 04:38:00.739708] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:04.172 [2024-11-27 04:38:00.739716] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:04.172 [2024-11-27 04:38:00.739745] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:04.172 [2024-11-27 04:38:00.739755] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:04.172 [2024-11-27 04:38:00.739764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.172 [2024-11-27 04:38:00.739772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:04.172 [2024-11-27 04:38:00.739781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:21:04.172 [2024-11-27 04:38:00.739790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.172 [2024-11-27 04:38:00.739874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.172 [2024-11-27 04:38:00.739886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:04.172 [2024-11-27 04:38:00.739893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:04.172 [2024-11-27 04:38:00.739903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.172 [2024-11-27 04:38:00.740007] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:04.172 [2024-11-27 04:38:00.740020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:04.172 [2024-11-27 04:38:00.740028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.172 [2024-11-27 04:38:00.740037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.172 [2024-11-27 04:38:00.740045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:04.172 [2024-11-27 04:38:00.740053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:04.172 [2024-11-27 04:38:00.740060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:04.172 
[2024-11-27 04:38:00.740069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:04.172 [2024-11-27 04:38:00.740076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:04.172 [2024-11-27 04:38:00.740085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.172 [2024-11-27 04:38:00.740091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:04.172 [2024-11-27 04:38:00.740105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:04.172 [2024-11-27 04:38:00.740112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.172 [2024-11-27 04:38:00.740120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:04.172 [2024-11-27 04:38:00.740127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:04.172 [2024-11-27 04:38:00.740136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.172 [2024-11-27 04:38:00.740142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:04.172 [2024-11-27 04:38:00.740150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:04.172 [2024-11-27 04:38:00.740157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.172 [2024-11-27 04:38:00.740168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:04.172 [2024-11-27 04:38:00.740176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:04.172 [2024-11-27 04:38:00.740184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.172 [2024-11-27 04:38:00.740191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:04.172 [2024-11-27 04:38:00.740199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:04.172 [2024-11-27 04:38:00.740206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.172 [2024-11-27 04:38:00.740214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:04.172 [2024-11-27 04:38:00.740220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:04.172 [2024-11-27 04:38:00.740228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.172 [2024-11-27 04:38:00.740235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:04.172 [2024-11-27 04:38:00.740243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:04.172 [2024-11-27 04:38:00.740249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.172 [2024-11-27 04:38:00.740259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:04.172 [2024-11-27 04:38:00.740266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:04.172 [2024-11-27 04:38:00.740273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.172 [2024-11-27 04:38:00.740280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:04.172 [2024-11-27 04:38:00.740288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:04.172 [2024-11-27 04:38:00.740294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.172 [2024-11-27 04:38:00.740302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:04.172 [2024-11-27 04:38:00.740309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:21:04.172 [2024-11-27 04:38:00.740317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.172 [2024-11-27 04:38:00.740323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:04.172 [2024-11-27 04:38:00.740331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:04.172 [2024-11-27 04:38:00.740338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.172 [2024-11-27 04:38:00.740346] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:04.172 [2024-11-27 04:38:00.740353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:04.173 [2024-11-27 04:38:00.740362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.173 [2024-11-27 04:38:00.740369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.173 [2024-11-27 04:38:00.740380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:04.173 [2024-11-27 04:38:00.740387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:04.173 [2024-11-27 04:38:00.740395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:04.173 [2024-11-27 04:38:00.740401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:04.173 [2024-11-27 04:38:00.740411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:04.173 [2024-11-27 04:38:00.740418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:04.173 [2024-11-27 04:38:00.740441] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:04.173 [2024-11-27 04:38:00.740450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.173 [2024-11-27 04:38:00.740460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:04.173 [2024-11-27 04:38:00.740468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:04.173 [2024-11-27 04:38:00.740476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:04.173 [2024-11-27 04:38:00.740483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:04.173 [2024-11-27 04:38:00.740492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:04.173 [2024-11-27 04:38:00.740499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:04.173 [2024-11-27 04:38:00.740507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:04.173 [2024-11-27 04:38:00.740514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:04.173 [2024-11-27 04:38:00.740524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:04.173 [2024-11-27 04:38:00.740531] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:04.173 [2024-11-27 04:38:00.740540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:04.173 [2024-11-27 04:38:00.740547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:04.173 [2024-11-27 04:38:00.740555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:04.173 [2024-11-27 04:38:00.740562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:04.173 [2024-11-27 04:38:00.740570] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:04.173 [2024-11-27 04:38:00.740578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.173 [2024-11-27 04:38:00.740592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:04.173 [2024-11-27 04:38:00.740599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:04.173 [2024-11-27 04:38:00.740608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:04.173 [2024-11-27 04:38:00.740616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:04.173 [2024-11-27 04:38:00.740625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.173 [2024-11-27 04:38:00.740632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:04.173 [2024-11-27 04:38:00.740640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:21:04.173 [2024-11-27 04:38:00.740647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.173 [2024-11-27 04:38:00.740681] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
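The hex pairs in the superblock dump above (blk_offs/blk_sz, counted in 4096-byte blocks) describe the same regions as the MiB figures in the earlier layout dump. A quick cross-check in shell arithmetic, using the type 0x2 (L2P) region as the worked example; both constants are copied from the dump and the 4096-byte block size is the one reported for this device:

    echo $(( 0x5000 * 4096 / 1048576 ))   # blk_sz 0x5000 -> 80 MiB, matching "Region l2p ... blocks: 80.00 MiB"
    echo $(( 0x20 * 4096 ))               # blk_offs 0x20 -> 131072 B, i.e. the 0.12 MiB l2p offset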
00:21:04.173 [2024-11-27 04:38:00.740691] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:06.718 [2024-11-27 04:38:02.673737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.718 [2024-11-27 04:38:02.673798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:06.718 [2024-11-27 04:38:02.673812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1933.035 ms 00:21:06.718 [2024-11-27 04:38:02.673819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.718 [2024-11-27 04:38:02.695823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.718 [2024-11-27 04:38:02.695870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:06.718 [2024-11-27 04:38:02.695888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.812 ms 00:21:06.718 [2024-11-27 04:38:02.695898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.718 [2024-11-27 04:38:02.696031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.718 [2024-11-27 04:38:02.696041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:06.718 [2024-11-27 04:38:02.696053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:06.718 [2024-11-27 04:38:02.696059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.718 [2024-11-27 04:38:02.739047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.718 [2024-11-27 04:38:02.739193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:06.718 [2024-11-27 04:38:02.739211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.954 ms 00:21:06.718 [2024-11-27 04:38:02.739219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.718 [2024-11-27 04:38:02.739267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.718 [2024-11-27 04:38:02.739276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:06.718 [2024-11-27 04:38:02.739286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:06.718 [2024-11-27 04:38:02.739293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 04:38:02.739645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.719 [2024-11-27 04:38:02.739687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:06.719 [2024-11-27 04:38:02.739696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:21:06.719 [2024-11-27 04:38:02.739702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 04:38:02.739827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.719 [2024-11-27 04:38:02.739836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:06.719 [2024-11-27 04:38:02.739845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:21:06.719 [2024-11-27 04:38:02.739853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 04:38:02.750697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.719 [2024-11-27 04:38:02.750835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:06.719 [2024-11-27 
04:38:02.750853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.830 ms 00:21:06.719 [2024-11-27 04:38:02.750863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 04:38:02.759897] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:21:06.719 [2024-11-27 04:38:02.764415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.719 [2024-11-27 04:38:02.764446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:06.719 [2024-11-27 04:38:02.764456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.483 ms 00:21:06.719 [2024-11-27 04:38:02.764464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 04:38:02.824928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.719 [2024-11-27 04:38:02.825118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:06.719 [2024-11-27 04:38:02.825139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.434 ms 00:21:06.719 [2024-11-27 04:38:02.825150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 04:38:02.825347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.719 [2024-11-27 04:38:02.825363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:06.719 [2024-11-27 04:38:02.825375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:21:06.719 [2024-11-27 04:38:02.825385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 04:38:02.848486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.719 [2024-11-27 04:38:02.848678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:06.719 [2024-11-27 04:38:02.848697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.045 ms 00:21:06.719 [2024-11-27 04:38:02.848708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 04:38:02.871423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.719 [2024-11-27 04:38:02.871599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:06.719 [2024-11-27 04:38:02.871617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.658 ms 00:21:06.719 [2024-11-27 04:38:02.871627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 04:38:02.872214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.719 [2024-11-27 04:38:02.872234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:06.719 [2024-11-27 04:38:02.872243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:21:06.719 [2024-11-27 04:38:02.872252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 04:38:02.938648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.719 [2024-11-27 04:38:02.938883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:06.719 [2024-11-27 04:38:02.938903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.353 ms 00:21:06.719 [2024-11-27 04:38:02.938914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 
04:38:02.963660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.719 [2024-11-27 04:38:02.963742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:06.719 [2024-11-27 04:38:02.963756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.675 ms 00:21:06.719 [2024-11-27 04:38:02.963765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 04:38:02.988097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.719 [2024-11-27 04:38:02.988156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:06.719 [2024-11-27 04:38:02.988169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.283 ms 00:21:06.719 [2024-11-27 04:38:02.988178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 04:38:03.012236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.719 [2024-11-27 04:38:03.012291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:06.719 [2024-11-27 04:38:03.012304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.015 ms 00:21:06.719 [2024-11-27 04:38:03.012314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 04:38:03.012357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.719 [2024-11-27 04:38:03.012370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:06.719 [2024-11-27 04:38:03.012378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:06.719 [2024-11-27 04:38:03.012388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 04:38:03.012469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.719 [2024-11-27 04:38:03.012481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:06.719 [2024-11-27 04:38:03.012489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:06.719 [2024-11-27 04:38:03.012500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.719 [2024-11-27 04:38:03.013391] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2285.312 ms, result 0 00:21:06.719 { 00:21:06.719 "name": "ftl0", 00:21:06.719 "uuid": "22594052-4043-486c-be41-3d6626378139" 00:21:06.719 } 00:21:06.719 04:38:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:21:06.719 04:38:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:21:06.719 04:38:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:21:06.719 04:38:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:21:06.981 [2024-11-27 04:38:03.389757] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:06.981 I/O size of 69632 is greater than zero copy threshold (65536). 00:21:06.981 Zero copy mechanism will not be used. 00:21:06.981 Running I/O for 4 seconds... 
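Two details of this first bdevperf invocation are visible in the notices above: the 69632-byte I/O size is exactly 17 of the device's 4096-byte blocks, and because it exceeds the 65536-byte zero-copy threshold, the zero-copy path is skipped for this run. A sketch of the arithmetic:

    echo $(( 69632 / 4096 ))    # 17 blocks per I/O
    echo $(( 69632 > 65536 ))   # 1 -> over the zero-copy threshold, as logged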
00:21:08.900 2518.00 IOPS, 167.21 MiB/s [2024-11-27T04:38:06.424Z] 2614.00 IOPS, 173.59 MiB/s [2024-11-27T04:38:07.804Z] 2719.00 IOPS, 180.56 MiB/s [2024-11-27T04:38:07.804Z] 2680.00 IOPS, 177.97 MiB/s 00:21:11.217 Latency(us) 00:21:11.217 [2024-11-27T04:38:07.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.217 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:21:11.217 ftl0 : 4.00 2679.03 177.90 0.00 0.00 391.31 166.20 2722.26 00:21:11.217 [2024-11-27T04:38:07.804Z] =================================================================================================================== 00:21:11.217 [2024-11-27T04:38:07.804Z] Total : 2679.03 177.90 0.00 0.00 391.31 166.20 2722.26 00:21:11.218 [2024-11-27 04:38:07.400195] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:11.218 { 00:21:11.218 "results": [ 00:21:11.218 { 00:21:11.218 "job": "ftl0", 00:21:11.218 "core_mask": "0x1", 00:21:11.218 "workload": "randwrite", 00:21:11.218 "status": "finished", 00:21:11.218 "queue_depth": 1, 00:21:11.218 "io_size": 69632, 00:21:11.218 "runtime": 4.001823, 00:21:11.218 "iops": 2679.0290325184296, 00:21:11.218 "mibps": 177.90427169067698, 00:21:11.218 "io_failed": 0, 00:21:11.218 "io_timeout": 0, 00:21:11.218 "avg_latency_us": 391.3102342634514, 00:21:11.218 "min_latency_us": 166.20307692307694, 00:21:11.218 "max_latency_us": 2722.2646153846154 00:21:11.218 } 00:21:11.218 ], 00:21:11.218 "core_count": 1 00:21:11.218 } 00:21:11.218 04:38:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:21:11.218 [2024-11-27 04:38:07.515216] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:11.218 Running I/O for 4 seconds... 
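The MiB/s column in the table above is consistent with IOPS times the 69632-byte I/O size, scaled to MiB. A sketch of the conversion in awk, with the long constant taken verbatim from the "iops" field of the JSON result:

    awk 'BEGIN { printf "%.2f\n", 2679.0290325184296 * 69632 / 1048576 }'   # -> 177.90, the reported mibps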
00:21:13.113 11405.00 IOPS, 44.55 MiB/s [2024-11-27T04:38:10.641Z] 10860.50 IOPS, 42.42 MiB/s [2024-11-27T04:38:11.651Z] 11020.67 IOPS, 43.05 MiB/s [2024-11-27T04:38:11.651Z] 10673.25 IOPS, 41.69 MiB/s 00:21:15.064 Latency(us) 00:21:15.064 [2024-11-27T04:38:11.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.064 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:21:15.064 ftl0 : 4.02 10647.77 41.59 0.00 0.00 11988.68 230.01 30449.03 00:21:15.064 [2024-11-27T04:38:11.651Z] =================================================================================================================== 00:21:15.064 [2024-11-27T04:38:11.651Z] Total : 10647.77 41.59 0.00 0.00 11988.68 0.00 30449.03 00:21:15.064 [2024-11-27 04:38:11.545935] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:15.064 { 00:21:15.064 "results": [ 00:21:15.064 { 00:21:15.065 "job": "ftl0", 00:21:15.065 "core_mask": "0x1", 00:21:15.065 "workload": "randwrite", 00:21:15.065 "status": "finished", 00:21:15.065 "queue_depth": 128, 00:21:15.065 "io_size": 4096, 00:21:15.065 "runtime": 4.021405, 00:21:15.065 "iops": 10647.77111482181, 00:21:15.065 "mibps": 41.59285591727269, 00:21:15.065 "io_failed": 0, 00:21:15.065 "io_timeout": 0, 00:21:15.065 "avg_latency_us": 11988.676499972154, 00:21:15.065 "min_latency_us": 230.00615384615384, 00:21:15.065 "max_latency_us": 30449.033846153845 00:21:15.065 } 00:21:15.065 ], 00:21:15.065 "core_count": 1 00:21:15.065 } 00:21:15.065 04:38:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:21:15.065 [2024-11-27 04:38:11.636921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:15.065 Running I/O for 4 seconds... 
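Each perform_tests run emits the same JSON shape: a "results" array with one entry per job, plus a "core_count". If a blob like the one above is saved to a file, a jq filter along these lines pulls out the headline numbers (jq is already used earlier in this run to read bdev fields; results.json is an illustrative filename, not one the test writes):

    jq -r '.results[] | [.job, .iops, .mibps, .avg_latency_us] | @tsv' results.json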
00:21:17.411 8806.00 IOPS, 34.40 MiB/s [2024-11-27T04:38:14.938Z] 8673.00 IOPS, 33.88 MiB/s [2024-11-27T04:38:15.971Z] 8650.33 IOPS, 33.79 MiB/s [2024-11-27T04:38:15.971Z] 8513.25 IOPS, 33.25 MiB/s 00:21:19.384 Latency(us) 00:21:19.384 [2024-11-27T04:38:15.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.384 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:19.384 Verification LBA range: start 0x0 length 0x1400000 00:21:19.384 ftl0 : 4.01 8515.56 33.26 0.00 0.00 14974.90 225.28 26416.05 00:21:19.384 [2024-11-27T04:38:15.971Z] =================================================================================================================== 00:21:19.384 [2024-11-27T04:38:15.971Z] Total : 8515.56 33.26 0.00 0.00 14974.90 0.00 26416.05 00:21:19.384 [2024-11-27 04:38:15.667720] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:19.384 { 00:21:19.384 "results": [ 00:21:19.384 { 00:21:19.384 "job": "ftl0", 00:21:19.384 "core_mask": "0x1", 00:21:19.384 "workload": "verify", 00:21:19.384 "status": "finished", 00:21:19.384 "verify_range": { 00:21:19.384 "start": 0, 00:21:19.384 "length": 20971520 00:21:19.384 }, 00:21:19.384 "queue_depth": 128, 00:21:19.384 "io_size": 4096, 00:21:19.384 "runtime": 4.013946, 00:21:19.384 "iops": 8515.560498322599, 00:21:19.384 "mibps": 33.26390819657265, 00:21:19.384 "io_failed": 0, 00:21:19.384 "io_timeout": 0, 00:21:19.384 "avg_latency_us": 14974.900178191661, 00:21:19.384 "min_latency_us": 225.28, 00:21:19.384 "max_latency_us": 26416.04923076923 00:21:19.384 } 00:21:19.384 ], 00:21:19.384 "core_count": 1 00:21:19.384 } 00:21:19.384 04:38:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:21:19.384 [2024-11-27 04:38:15.881561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.385 [2024-11-27 04:38:15.881620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:19.385 [2024-11-27 04:38:15.881635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:19.385 [2024-11-27 04:38:15.881644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.385 [2024-11-27 04:38:15.881667] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:19.385 [2024-11-27 04:38:15.884280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.385 [2024-11-27 04:38:15.884312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:19.385 [2024-11-27 04:38:15.884325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:21:19.385 [2024-11-27 04:38:15.884333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.385 [2024-11-27 04:38:15.885911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.385 [2024-11-27 04:38:15.886043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:19.385 [2024-11-27 04:38:15.886068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.550 ms 00:21:19.385 [2024-11-27 04:38:15.886076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.647 [2024-11-27 04:38:16.039776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.647 [2024-11-27 04:38:16.039821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:21:19.647 [2024-11-27 04:38:16.039837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 153.673 ms 00:21:19.647 [2024-11-27 04:38:16.039845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.647 [2024-11-27 04:38:16.046010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.647 [2024-11-27 04:38:16.046041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:19.647 [2024-11-27 04:38:16.046056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.129 ms 00:21:19.647 [2024-11-27 04:38:16.046064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.647 [2024-11-27 04:38:16.069728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.647 [2024-11-27 04:38:16.069765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:19.647 [2024-11-27 04:38:16.069779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.598 ms 00:21:19.647 [2024-11-27 04:38:16.069786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.647 [2024-11-27 04:38:16.083851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.647 [2024-11-27 04:38:16.083887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:19.647 [2024-11-27 04:38:16.083902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.025 ms 00:21:19.647 [2024-11-27 04:38:16.083911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.647 [2024-11-27 04:38:16.084048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.647 [2024-11-27 04:38:16.084059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:19.647 [2024-11-27 04:38:16.084071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:21:19.647 [2024-11-27 04:38:16.084078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.647 [2024-11-27 04:38:16.106311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.647 [2024-11-27 04:38:16.106346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:19.647 [2024-11-27 04:38:16.106358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.214 ms 00:21:19.647 [2024-11-27 04:38:16.106365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.647 [2024-11-27 04:38:16.128369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.648 [2024-11-27 04:38:16.128408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:19.648 [2024-11-27 04:38:16.128421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.966 ms 00:21:19.648 [2024-11-27 04:38:16.128429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.648 [2024-11-27 04:38:16.150474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.648 [2024-11-27 04:38:16.150507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:19.648 [2024-11-27 04:38:16.150519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.007 ms 00:21:19.648 [2024-11-27 04:38:16.150526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.648 [2024-11-27 04:38:16.173411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.648 [2024-11-27 04:38:16.173610] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:19.648 [2024-11-27 04:38:16.173635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.815 ms 00:21:19.648 [2024-11-27 04:38:16.173643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.648 [2024-11-27 04:38:16.173680] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:19.648 [2024-11-27 04:38:16.173694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:21:19.648 [2024-11-27 04:38:16.173904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.173998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:19.648 [2024-11-27 04:38:16.174170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174540] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:19.649 [2024-11-27 04:38:16.174583] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:19.649 [2024-11-27 04:38:16.174595] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22594052-4043-486c-be41-3d6626378139 00:21:19.649 [2024-11-27 04:38:16.174602] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:19.649 [2024-11-27 04:38:16.174611] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:19.649 [2024-11-27 04:38:16.174618] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:19.649 [2024-11-27 04:38:16.174627] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:19.649 [2024-11-27 04:38:16.174634] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:19.649 [2024-11-27 04:38:16.174643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:19.649 [2024-11-27 04:38:16.174650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:19.649 [2024-11-27 04:38:16.174659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:19.649 [2024-11-27 04:38:16.174665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:19.649 [2024-11-27 04:38:16.174675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.649 [2024-11-27 04:38:16.174682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:19.649 [2024-11-27 04:38:16.174692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:21:19.649 [2024-11-27 04:38:16.174699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.649 [2024-11-27 04:38:16.187178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.649 [2024-11-27 04:38:16.187212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:19.649 [2024-11-27 04:38:16.187225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.389 ms 00:21:19.649 [2024-11-27 04:38:16.187233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.649 [2024-11-27 04:38:16.187600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.649 [2024-11-27 04:38:16.187652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:19.649 [2024-11-27 04:38:16.187662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:21:19.649 [2024-11-27 04:38:16.187672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.650 [2024-11-27 04:38:16.222493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.650 [2024-11-27 04:38:16.222544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:19.650 [2024-11-27 04:38:16.222560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.650 [2024-11-27 04:38:16.222569] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:19.650 [2024-11-27 04:38:16.222637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.650 [2024-11-27 04:38:16.222645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:19.650 [2024-11-27 04:38:16.222655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.650 [2024-11-27 04:38:16.222664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.650 [2024-11-27 04:38:16.222769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.650 [2024-11-27 04:38:16.222780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:19.650 [2024-11-27 04:38:16.222790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.650 [2024-11-27 04:38:16.222798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.650 [2024-11-27 04:38:16.222815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.650 [2024-11-27 04:38:16.222823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:19.650 [2024-11-27 04:38:16.222832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.650 [2024-11-27 04:38:16.222839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.910 [2024-11-27 04:38:16.300355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.910 [2024-11-27 04:38:16.300406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:19.910 [2024-11-27 04:38:16.300421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.910 [2024-11-27 04:38:16.300429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.910 [2024-11-27 04:38:16.365043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.910 [2024-11-27 04:38:16.365101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:19.910 [2024-11-27 04:38:16.365116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.910 [2024-11-27 04:38:16.365127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.910 [2024-11-27 04:38:16.365223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.910 [2024-11-27 04:38:16.365235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:19.910 [2024-11-27 04:38:16.365245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.910 [2024-11-27 04:38:16.365252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.910 [2024-11-27 04:38:16.365294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.910 [2024-11-27 04:38:16.365303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:19.910 [2024-11-27 04:38:16.365312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.910 [2024-11-27 04:38:16.365319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.910 [2024-11-27 04:38:16.365418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.910 [2024-11-27 04:38:16.365429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:19.911 [2024-11-27 04:38:16.365440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:21:19.911 [2024-11-27 04:38:16.365448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.911 [2024-11-27 04:38:16.365479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.911 [2024-11-27 04:38:16.365487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:19.911 [2024-11-27 04:38:16.365496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.911 [2024-11-27 04:38:16.365503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.911 [2024-11-27 04:38:16.365536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.911 [2024-11-27 04:38:16.365546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:19.911 [2024-11-27 04:38:16.365555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.911 [2024-11-27 04:38:16.365569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.911 [2024-11-27 04:38:16.365608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.911 [2024-11-27 04:38:16.365617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:19.911 [2024-11-27 04:38:16.365626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.911 [2024-11-27 04:38:16.365633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.911 [2024-11-27 04:38:16.365775] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 484.150 ms, result 0 00:21:19.911 true 00:21:19.911 04:38:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75962 00:21:19.911 04:38:16 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75962 ']' 00:21:19.911 04:38:16 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75962 00:21:19.911 04:38:16 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:21:19.911 04:38:16 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.911 04:38:16 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75962 00:21:19.911 killing process with pid 75962 00:21:19.911 Received shutdown signal, test time was about 4.000000 seconds 00:21:19.911 00:21:19.911 Latency(us) 00:21:19.911 [2024-11-27T04:38:16.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.911 [2024-11-27T04:38:16.498Z] =================================================================================================================== 00:21:19.911 [2024-11-27T04:38:16.498Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:19.911 04:38:16 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:19.911 04:38:16 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:19.911 04:38:16 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75962' 00:21:19.911 04:38:16 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75962 00:21:19.911 04:38:16 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75962 00:21:32.144 Remove shared memory files 00:21:32.144 04:38:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:32.144 04:38:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:21:32.144 04:38:26 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:32.144 04:38:26 
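Condensed, the ftl_bdevperf phase that just finished ran this core sequence, with every command as it appears in the trace above (the -d argument is the thin-provisioned lvol created for the test):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 31f3c73e-d98d-45f1-8760-a8d71c1fe440 -c nvc0n1p0 --l2p_dram_limit 20
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0

The "WAF: inf" in the shutdown stats dump follows from the counters printed alongside it: 960 total media writes against 0 user writes, and a write-amplification ratio over zero user writes is undefined, so it prints as infinity.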
00:21:32.144 04:38:26 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:21:32.144 04:38:26 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:21:32.144 04:38:26 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:21:32.144 04:38:26 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:21:32.144 04:38:26 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:21:32.144 ************************************
00:21:32.144 END TEST ftl_bdevperf
00:21:32.144 ************************************
00:21:32.144
00:21:32.144 real 0m30.112s
00:21:32.144 user 0m32.937s
00:21:32.144 sys 0m0.951s
00:21:32.144 04:38:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:32.144 04:38:26 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:21:32.144 04:38:26 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:21:32.144 04:38:26 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:21:32.144 04:38:26 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:32.144 04:38:26 ftl -- common/autotest_common.sh@10 -- # set +x
00:21:32.144 ************************************
00:21:32.144 START TEST ftl_trim
00:21:32.144 ************************************
00:21:32.144 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:21:32.144 * Looking for test storage...
00:21:32.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:21:32.144 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:21:32.144 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:21:32.144 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version
00:21:32.144 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:32.144 04:38:26 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:21:32.144 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:32.144 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:32.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.144 --rc genhtml_branch_coverage=1 00:21:32.144 --rc genhtml_function_coverage=1 00:21:32.144 --rc genhtml_legend=1 00:21:32.144 --rc geninfo_all_blocks=1 00:21:32.144 --rc geninfo_unexecuted_blocks=1 00:21:32.144 00:21:32.144 ' 00:21:32.144 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:32.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.144 --rc genhtml_branch_coverage=1 00:21:32.144 --rc genhtml_function_coverage=1 00:21:32.144 --rc genhtml_legend=1 00:21:32.144 --rc geninfo_all_blocks=1 00:21:32.144 --rc geninfo_unexecuted_blocks=1 00:21:32.144 00:21:32.144 ' 00:21:32.144 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:32.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.144 --rc genhtml_branch_coverage=1 00:21:32.144 --rc genhtml_function_coverage=1 00:21:32.144 --rc genhtml_legend=1 00:21:32.145 --rc geninfo_all_blocks=1 00:21:32.145 --rc geninfo_unexecuted_blocks=1 00:21:32.145 00:21:32.145 ' 00:21:32.145 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:32.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.145 --rc genhtml_branch_coverage=1 00:21:32.145 --rc genhtml_function_coverage=1 00:21:32.145 --rc genhtml_legend=1 00:21:32.145 --rc geninfo_all_blocks=1 00:21:32.145 --rc geninfo_unexecuted_blocks=1 00:21:32.145 00:21:32.145 ' 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
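
The lt 1.15 2 walk traced above decides whether the installed lcov is old enough to need the extra branch-coverage flags, and it reduces to cmp_versions 1.15 '<' 2 in scripts/common.sh: split both version strings on ".", "-" and ":", then compare component by component. The condensed sketch below captures that logic; it is a reconstruction, and the real script additionally validates each component with decimal() and tallies lt/gt/eq counters before dispatching on the operator.

  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      # walk up to the longer of the two versions, padding the shorter with zeros
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}
          (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }   # first difference decides
          (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # versions compare equal
  }
  # lt() is just: cmp_versions "$1" '<' "$2" -- here 1 < 2 settles it on the first
  # component, so the old-lcov LCOV_OPTS seen above get exported.
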
00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:32.145 04:38:26 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76293 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76293 00:21:32.145 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76293 ']' 00:21:32.145 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.145 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.145 04:38:26 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:32.145 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.145 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.145 04:38:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:32.145 [2024-11-27 04:38:27.069136] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:21:32.145 [2024-11-27 04:38:27.069462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76293 ] 00:21:32.145 [2024-11-27 04:38:27.244341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:32.145 [2024-11-27 04:38:27.348534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.145 [2024-11-27 04:38:27.348599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.145 [2024-11-27 04:38:27.348602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.145 04:38:27 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.145 04:38:27 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:32.145 04:38:27 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:32.145 04:38:27 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:32.145 04:38:27 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:32.145 04:38:27 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:32.145 04:38:27 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:32.145 04:38:27 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:32.145 04:38:28 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:32.145 04:38:28 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:32.145 04:38:28 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:32.145 04:38:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:32.145 04:38:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:32.145 04:38:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:32.145 04:38:28 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:32.145 04:38:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:32.145 04:38:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:32.145 { 00:21:32.145 "name": "nvme0n1", 00:21:32.145 "aliases": [ 
00:21:32.145 "b1e845d0-199b-440a-9edd-acfe52d9bbb8" 00:21:32.145 ], 00:21:32.145 "product_name": "NVMe disk", 00:21:32.145 "block_size": 4096, 00:21:32.145 "num_blocks": 1310720, 00:21:32.145 "uuid": "b1e845d0-199b-440a-9edd-acfe52d9bbb8", 00:21:32.145 "numa_id": -1, 00:21:32.145 "assigned_rate_limits": { 00:21:32.145 "rw_ios_per_sec": 0, 00:21:32.145 "rw_mbytes_per_sec": 0, 00:21:32.145 "r_mbytes_per_sec": 0, 00:21:32.145 "w_mbytes_per_sec": 0 00:21:32.145 }, 00:21:32.145 "claimed": true, 00:21:32.145 "claim_type": "read_many_write_one", 00:21:32.145 "zoned": false, 00:21:32.145 "supported_io_types": { 00:21:32.145 "read": true, 00:21:32.145 "write": true, 00:21:32.145 "unmap": true, 00:21:32.145 "flush": true, 00:21:32.145 "reset": true, 00:21:32.145 "nvme_admin": true, 00:21:32.145 "nvme_io": true, 00:21:32.145 "nvme_io_md": false, 00:21:32.145 "write_zeroes": true, 00:21:32.145 "zcopy": false, 00:21:32.145 "get_zone_info": false, 00:21:32.145 "zone_management": false, 00:21:32.145 "zone_append": false, 00:21:32.145 "compare": true, 00:21:32.145 "compare_and_write": false, 00:21:32.145 "abort": true, 00:21:32.145 "seek_hole": false, 00:21:32.145 "seek_data": false, 00:21:32.145 "copy": true, 00:21:32.145 "nvme_iov_md": false 00:21:32.145 }, 00:21:32.145 "driver_specific": { 00:21:32.145 "nvme": [ 00:21:32.145 { 00:21:32.145 "pci_address": "0000:00:11.0", 00:21:32.145 "trid": { 00:21:32.145 "trtype": "PCIe", 00:21:32.145 "traddr": "0000:00:11.0" 00:21:32.145 }, 00:21:32.145 "ctrlr_data": { 00:21:32.145 "cntlid": 0, 00:21:32.145 "vendor_id": "0x1b36", 00:21:32.145 "model_number": "QEMU NVMe Ctrl", 00:21:32.145 "serial_number": "12341", 00:21:32.145 "firmware_revision": "8.0.0", 00:21:32.145 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:32.145 "oacs": { 00:21:32.145 "security": 0, 00:21:32.145 "format": 1, 00:21:32.145 "firmware": 0, 00:21:32.145 "ns_manage": 1 00:21:32.145 }, 00:21:32.145 "multi_ctrlr": false, 00:21:32.145 "ana_reporting": false 00:21:32.145 }, 00:21:32.145 "vs": { 00:21:32.145 "nvme_version": "1.4" 00:21:32.145 }, 00:21:32.145 "ns_data": { 00:21:32.145 "id": 1, 00:21:32.145 "can_share": false 00:21:32.145 } 00:21:32.145 } 00:21:32.145 ], 00:21:32.145 "mp_policy": "active_passive" 00:21:32.145 } 00:21:32.145 } 00:21:32.145 ]' 00:21:32.145 04:38:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:32.145 04:38:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:32.145 04:38:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:32.145 04:38:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:32.146 04:38:28 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:32.146 04:38:28 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:21:32.146 04:38:28 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:32.146 04:38:28 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:32.146 04:38:28 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:32.146 04:38:28 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:32.146 04:38:28 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:32.407 04:38:28 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=8c74b026-1dcc-4dde-a246-309ad2ae0ee9 00:21:32.407 04:38:28 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:32.407 04:38:28 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 8c74b026-1dcc-4dde-a246-309ad2ae0ee9 00:21:32.672 04:38:29 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:32.939 04:38:29 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=72044cfe-54c3-46a0-8bef-a2899f98d9e0 00:21:32.939 04:38:29 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 72044cfe-54c3-46a0-8bef-a2899f98d9e0 00:21:33.199 04:38:29 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=346f6af0-83bf-4004-b193-783d42f0297b 00:21:33.199 04:38:29 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 346f6af0-83bf-4004-b193-783d42f0297b 00:21:33.199 04:38:29 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:33.199 04:38:29 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:33.199 04:38:29 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=346f6af0-83bf-4004-b193-783d42f0297b 00:21:33.199 04:38:29 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:33.199 04:38:29 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 346f6af0-83bf-4004-b193-783d42f0297b 00:21:33.199 04:38:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=346f6af0-83bf-4004-b193-783d42f0297b 00:21:33.199 04:38:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:33.199 04:38:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:33.199 04:38:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:33.199 04:38:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 346f6af0-83bf-4004-b193-783d42f0297b 00:21:33.199 04:38:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:33.199 { 00:21:33.199 "name": "346f6af0-83bf-4004-b193-783d42f0297b", 00:21:33.199 "aliases": [ 00:21:33.199 "lvs/nvme0n1p0" 00:21:33.199 ], 00:21:33.199 "product_name": "Logical Volume", 00:21:33.199 "block_size": 4096, 00:21:33.199 "num_blocks": 26476544, 00:21:33.199 "uuid": "346f6af0-83bf-4004-b193-783d42f0297b", 00:21:33.199 "assigned_rate_limits": { 00:21:33.199 "rw_ios_per_sec": 0, 00:21:33.199 "rw_mbytes_per_sec": 0, 00:21:33.199 "r_mbytes_per_sec": 0, 00:21:33.199 "w_mbytes_per_sec": 0 00:21:33.199 }, 00:21:33.199 "claimed": false, 00:21:33.199 "zoned": false, 00:21:33.199 "supported_io_types": { 00:21:33.199 "read": true, 00:21:33.199 "write": true, 00:21:33.199 "unmap": true, 00:21:33.199 "flush": false, 00:21:33.199 "reset": true, 00:21:33.199 "nvme_admin": false, 00:21:33.199 "nvme_io": false, 00:21:33.199 "nvme_io_md": false, 00:21:33.199 "write_zeroes": true, 00:21:33.199 "zcopy": false, 00:21:33.199 "get_zone_info": false, 00:21:33.199 "zone_management": false, 00:21:33.199 "zone_append": false, 00:21:33.199 "compare": false, 00:21:33.199 "compare_and_write": false, 00:21:33.199 "abort": false, 00:21:33.199 "seek_hole": true, 00:21:33.199 "seek_data": true, 00:21:33.199 "copy": false, 00:21:33.199 "nvme_iov_md": false 00:21:33.199 }, 00:21:33.199 "driver_specific": { 00:21:33.199 "lvol": { 00:21:33.199 "lvol_store_uuid": "72044cfe-54c3-46a0-8bef-a2899f98d9e0", 00:21:33.199 "base_bdev": "nvme0n1", 00:21:33.199 "thin_provision": true, 00:21:33.199 "num_allocated_clusters": 0, 00:21:33.199 "snapshot": false, 00:21:33.199 "clone": false, 00:21:33.199 "esnap_clone": false 00:21:33.199 } 00:21:33.199 } 00:21:33.199 } 00:21:33.199 ]' 00:21:33.199 04:38:29 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:33.199 04:38:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:33.199 04:38:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:33.459 04:38:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:33.459 04:38:29 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:33.459 04:38:29 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:33.459 04:38:29 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:33.459 04:38:29 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:33.459 04:38:29 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:33.459 04:38:30 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:33.459 04:38:30 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:33.459 04:38:30 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 346f6af0-83bf-4004-b193-783d42f0297b 00:21:33.459 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=346f6af0-83bf-4004-b193-783d42f0297b 00:21:33.459 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:33.459 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:33.459 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:33.459 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 346f6af0-83bf-4004-b193-783d42f0297b 00:21:34.029 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:34.029 { 00:21:34.029 "name": "346f6af0-83bf-4004-b193-783d42f0297b", 00:21:34.029 "aliases": [ 00:21:34.029 "lvs/nvme0n1p0" 00:21:34.029 ], 00:21:34.029 "product_name": "Logical Volume", 00:21:34.029 "block_size": 4096, 00:21:34.029 "num_blocks": 26476544, 00:21:34.029 "uuid": "346f6af0-83bf-4004-b193-783d42f0297b", 00:21:34.029 "assigned_rate_limits": { 00:21:34.029 "rw_ios_per_sec": 0, 00:21:34.029 "rw_mbytes_per_sec": 0, 00:21:34.029 "r_mbytes_per_sec": 0, 00:21:34.029 "w_mbytes_per_sec": 0 00:21:34.029 }, 00:21:34.029 "claimed": false, 00:21:34.029 "zoned": false, 00:21:34.029 "supported_io_types": { 00:21:34.029 "read": true, 00:21:34.029 "write": true, 00:21:34.029 "unmap": true, 00:21:34.029 "flush": false, 00:21:34.029 "reset": true, 00:21:34.029 "nvme_admin": false, 00:21:34.029 "nvme_io": false, 00:21:34.029 "nvme_io_md": false, 00:21:34.029 "write_zeroes": true, 00:21:34.029 "zcopy": false, 00:21:34.029 "get_zone_info": false, 00:21:34.029 "zone_management": false, 00:21:34.029 "zone_append": false, 00:21:34.029 "compare": false, 00:21:34.029 "compare_and_write": false, 00:21:34.029 "abort": false, 00:21:34.029 "seek_hole": true, 00:21:34.029 "seek_data": true, 00:21:34.029 "copy": false, 00:21:34.029 "nvme_iov_md": false 00:21:34.029 }, 00:21:34.029 "driver_specific": { 00:21:34.029 "lvol": { 00:21:34.029 "lvol_store_uuid": "72044cfe-54c3-46a0-8bef-a2899f98d9e0", 00:21:34.029 "base_bdev": "nvme0n1", 00:21:34.029 "thin_provision": true, 00:21:34.029 "num_allocated_clusters": 0, 00:21:34.029 "snapshot": false, 00:21:34.029 "clone": false, 00:21:34.029 "esnap_clone": false 00:21:34.029 } 00:21:34.029 } 00:21:34.029 } 00:21:34.029 ]' 00:21:34.029 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:34.029 04:38:30 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:21:34.029 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:34.029 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:34.029 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:34.029 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:34.029 04:38:30 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:34.029 04:38:30 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:34.029 04:38:30 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:34.029 04:38:30 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:34.029 04:38:30 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 346f6af0-83bf-4004-b193-783d42f0297b 00:21:34.029 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=346f6af0-83bf-4004-b193-783d42f0297b 00:21:34.029 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:34.029 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:34.030 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:34.030 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 346f6af0-83bf-4004-b193-783d42f0297b 00:21:34.290 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:34.290 { 00:21:34.290 "name": "346f6af0-83bf-4004-b193-783d42f0297b", 00:21:34.290 "aliases": [ 00:21:34.290 "lvs/nvme0n1p0" 00:21:34.290 ], 00:21:34.290 "product_name": "Logical Volume", 00:21:34.290 "block_size": 4096, 00:21:34.290 "num_blocks": 26476544, 00:21:34.290 "uuid": "346f6af0-83bf-4004-b193-783d42f0297b", 00:21:34.290 "assigned_rate_limits": { 00:21:34.290 "rw_ios_per_sec": 0, 00:21:34.290 "rw_mbytes_per_sec": 0, 00:21:34.290 "r_mbytes_per_sec": 0, 00:21:34.290 "w_mbytes_per_sec": 0 00:21:34.290 }, 00:21:34.290 "claimed": false, 00:21:34.290 "zoned": false, 00:21:34.290 "supported_io_types": { 00:21:34.290 "read": true, 00:21:34.290 "write": true, 00:21:34.290 "unmap": true, 00:21:34.290 "flush": false, 00:21:34.290 "reset": true, 00:21:34.290 "nvme_admin": false, 00:21:34.290 "nvme_io": false, 00:21:34.290 "nvme_io_md": false, 00:21:34.290 "write_zeroes": true, 00:21:34.290 "zcopy": false, 00:21:34.290 "get_zone_info": false, 00:21:34.290 "zone_management": false, 00:21:34.290 "zone_append": false, 00:21:34.290 "compare": false, 00:21:34.290 "compare_and_write": false, 00:21:34.290 "abort": false, 00:21:34.290 "seek_hole": true, 00:21:34.290 "seek_data": true, 00:21:34.290 "copy": false, 00:21:34.290 "nvme_iov_md": false 00:21:34.290 }, 00:21:34.290 "driver_specific": { 00:21:34.290 "lvol": { 00:21:34.290 "lvol_store_uuid": "72044cfe-54c3-46a0-8bef-a2899f98d9e0", 00:21:34.290 "base_bdev": "nvme0n1", 00:21:34.290 "thin_provision": true, 00:21:34.290 "num_allocated_clusters": 0, 00:21:34.290 "snapshot": false, 00:21:34.290 "clone": false, 00:21:34.290 "esnap_clone": false 00:21:34.290 } 00:21:34.290 } 00:21:34.290 } 00:21:34.290 ]' 00:21:34.290 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:34.290 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:34.290 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:34.552 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:21:34.552 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:34.552 04:38:30 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:34.552 04:38:30 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:34.552 04:38:30 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 346f6af0-83bf-4004-b193-783d42f0297b -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:34.552 [2024-11-27 04:38:31.054255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.552 [2024-11-27 04:38:31.054311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:34.552 [2024-11-27 04:38:31.054327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:34.552 [2024-11-27 04:38:31.054335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.552 [2024-11-27 04:38:31.057129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.552 [2024-11-27 04:38:31.057167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:34.552 [2024-11-27 04:38:31.057180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.769 ms 00:21:34.552 [2024-11-27 04:38:31.057189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.552 [2024-11-27 04:38:31.057317] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:34.552 [2024-11-27 04:38:31.058029] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:34.552 [2024-11-27 04:38:31.058059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.552 [2024-11-27 04:38:31.058068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:34.552 [2024-11-27 04:38:31.058078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:21:34.552 [2024-11-27 04:38:31.058085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.552 [2024-11-27 04:38:31.058263] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b290c8a9-93b3-40e8-884a-c7cbc275854f 00:21:34.552 [2024-11-27 04:38:31.059323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.552 [2024-11-27 04:38:31.059457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:34.552 [2024-11-27 04:38:31.059473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:34.552 [2024-11-27 04:38:31.059483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.552 [2024-11-27 04:38:31.065021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.552 [2024-11-27 04:38:31.065133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:34.552 [2024-11-27 04:38:31.065190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.455 ms 00:21:34.552 [2024-11-27 04:38:31.065217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.552 [2024-11-27 04:38:31.065365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.552 [2024-11-27 04:38:31.065401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:34.552 [2024-11-27 04:38:31.065422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.073 ms 00:21:34.552 [2024-11-27 04:38:31.065485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.552 [2024-11-27 04:38:31.065536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.552 [2024-11-27 04:38:31.065563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:34.552 [2024-11-27 04:38:31.065584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:34.552 [2024-11-27 04:38:31.065648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.552 [2024-11-27 04:38:31.065695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:34.552 [2024-11-27 04:38:31.069375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.553 [2024-11-27 04:38:31.069476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:34.553 [2024-11-27 04:38:31.069525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.684 ms 00:21:34.553 [2024-11-27 04:38:31.069547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.553 [2024-11-27 04:38:31.069631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.553 [2024-11-27 04:38:31.069759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:34.553 [2024-11-27 04:38:31.069812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:34.553 [2024-11-27 04:38:31.069831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.553 [2024-11-27 04:38:31.069876] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:34.553 [2024-11-27 04:38:31.070025] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:34.553 [2024-11-27 04:38:31.070070] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:34.553 [2024-11-27 04:38:31.070104] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:34.553 [2024-11-27 04:38:31.070175] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:34.553 [2024-11-27 04:38:31.070285] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:34.553 [2024-11-27 04:38:31.070318] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:34.553 [2024-11-27 04:38:31.070337] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:34.553 [2024-11-27 04:38:31.070359] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:34.553 [2024-11-27 04:38:31.070377] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:34.553 [2024-11-27 04:38:31.070398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.553 [2024-11-27 04:38:31.070494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:34.553 [2024-11-27 04:38:31.070556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:21:34.553 [2024-11-27 04:38:31.070574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.553 [2024-11-27 04:38:31.070687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.553 
[2024-11-27 04:38:31.070766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:34.553 [2024-11-27 04:38:31.070801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:34.553 [2024-11-27 04:38:31.070820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.553 [2024-11-27 04:38:31.070956] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:34.553 [2024-11-27 04:38:31.070994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:34.553 [2024-11-27 04:38:31.071028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:34.553 [2024-11-27 04:38:31.071048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.553 [2024-11-27 04:38:31.071068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:34.553 [2024-11-27 04:38:31.071086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:34.553 [2024-11-27 04:38:31.071106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:34.553 [2024-11-27 04:38:31.071125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:34.553 [2024-11-27 04:38:31.071225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:34.553 [2024-11-27 04:38:31.071249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:34.553 [2024-11-27 04:38:31.071269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:34.553 [2024-11-27 04:38:31.071287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:34.553 [2024-11-27 04:38:31.071340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:34.553 [2024-11-27 04:38:31.071362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:34.553 [2024-11-27 04:38:31.071383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:34.553 [2024-11-27 04:38:31.071401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.553 [2024-11-27 04:38:31.071422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:34.553 [2024-11-27 04:38:31.071464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:34.553 [2024-11-27 04:38:31.071486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.553 [2024-11-27 04:38:31.071504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:34.553 [2024-11-27 04:38:31.071523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:34.553 [2024-11-27 04:38:31.071574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.553 [2024-11-27 04:38:31.071598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:34.553 [2024-11-27 04:38:31.071685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:34.553 [2024-11-27 04:38:31.071709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.553 [2024-11-27 04:38:31.071738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:34.553 [2024-11-27 04:38:31.071759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:34.553 [2024-11-27 04:38:31.071844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.553 [2024-11-27 04:38:31.071868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:21:34.553 [2024-11-27 04:38:31.071887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:34.553 [2024-11-27 04:38:31.071907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.553 [2024-11-27 04:38:31.071953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:34.553 [2024-11-27 04:38:31.071979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:34.553 [2024-11-27 04:38:31.071997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:34.553 [2024-11-27 04:38:31.072017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:34.553 [2024-11-27 04:38:31.072035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:34.553 [2024-11-27 04:38:31.072084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:34.553 [2024-11-27 04:38:31.072105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:34.553 [2024-11-27 04:38:31.072125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:34.553 [2024-11-27 04:38:31.072143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.553 [2024-11-27 04:38:31.072163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:34.553 [2024-11-27 04:38:31.072216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:34.553 [2024-11-27 04:38:31.072241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.553 [2024-11-27 04:38:31.072259] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:34.553 [2024-11-27 04:38:31.072280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:34.553 [2024-11-27 04:38:31.072327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:34.553 [2024-11-27 04:38:31.072353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.553 [2024-11-27 04:38:31.072380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:34.553 [2024-11-27 04:38:31.072402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:34.553 [2024-11-27 04:38:31.072420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:34.553 [2024-11-27 04:38:31.072440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:34.553 [2024-11-27 04:38:31.072495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:34.553 [2024-11-27 04:38:31.072521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:34.553 [2024-11-27 04:38:31.072544] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:34.553 [2024-11-27 04:38:31.072576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:34.553 [2024-11-27 04:38:31.072635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:34.553 [2024-11-27 04:38:31.072665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:34.553 [2024-11-27 04:38:31.072719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:21:34.553 [2024-11-27 04:38:31.072809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:34.553 [2024-11-27 04:38:31.072842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:34.553 [2024-11-27 04:38:31.072898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:34.553 [2024-11-27 04:38:31.072927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:34.553 [2024-11-27 04:38:31.073036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:34.553 [2024-11-27 04:38:31.073066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:34.553 [2024-11-27 04:38:31.073127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:34.553 [2024-11-27 04:38:31.073157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:34.553 [2024-11-27 04:38:31.073211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:34.553 [2024-11-27 04:38:31.073267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:34.553 [2024-11-27 04:38:31.073322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:34.553 [2024-11-27 04:38:31.073352] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:34.553 [2024-11-27 04:38:31.073414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:34.553 [2024-11-27 04:38:31.073446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:34.553 [2024-11-27 04:38:31.073498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:34.554 [2024-11-27 04:38:31.073556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:34.554 [2024-11-27 04:38:31.073622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:34.554 [2024-11-27 04:38:31.073653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.554 [2024-11-27 04:38:31.073761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:34.554 [2024-11-27 04:38:31.073786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:21:34.554 [2024-11-27 04:38:31.073807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.554 [2024-11-27 04:38:31.073913] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:21:34.554 [2024-11-27 04:38:31.073956] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:37.106 [2024-11-27 04:38:33.067362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.106 [2024-11-27 04:38:33.067580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:37.106 [2024-11-27 04:38:33.067600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1993.439 ms 00:21:37.106 [2024-11-27 04:38:33.067611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.106 [2024-11-27 04:38:33.093570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.106 [2024-11-27 04:38:33.093787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:37.106 [2024-11-27 04:38:33.093806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.693 ms 00:21:37.106 [2024-11-27 04:38:33.093816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.106 [2024-11-27 04:38:33.094009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.106 [2024-11-27 04:38:33.094030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:37.106 [2024-11-27 04:38:33.094063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:21:37.106 [2024-11-27 04:38:33.094086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.106 [2024-11-27 04:38:33.141394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.106 [2024-11-27 04:38:33.141455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:37.106 [2024-11-27 04:38:33.141468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.267 ms 00:21:37.106 [2024-11-27 04:38:33.141479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.106 [2024-11-27 04:38:33.141588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.106 [2024-11-27 04:38:33.141602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:37.106 [2024-11-27 04:38:33.141611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:37.106 [2024-11-27 04:38:33.141620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.106 [2024-11-27 04:38:33.142006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.106 [2024-11-27 04:38:33.142032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:37.106 [2024-11-27 04:38:33.142042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:21:37.106 [2024-11-27 04:38:33.142051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.106 [2024-11-27 04:38:33.142184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.106 [2024-11-27 04:38:33.142194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:37.106 [2024-11-27 04:38:33.142215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:21:37.106 [2024-11-27 04:38:33.142226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.106 [2024-11-27 04:38:33.156703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.106 [2024-11-27 04:38:33.156774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:21:37.106 [2024-11-27 04:38:33.156787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.442 ms 00:21:37.106 [2024-11-27 04:38:33.156805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.106 [2024-11-27 04:38:33.168902] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:37.106 [2024-11-27 04:38:33.183937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.106 [2024-11-27 04:38:33.184153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:37.106 [2024-11-27 04:38:33.184176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.983 ms 00:21:37.106 [2024-11-27 04:38:33.184184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.106 [2024-11-27 04:38:33.244170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.106 [2024-11-27 04:38:33.244235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:37.106 [2024-11-27 04:38:33.244257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.883 ms 00:21:37.106 [2024-11-27 04:38:33.244266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.106 [2024-11-27 04:38:33.244518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.106 [2024-11-27 04:38:33.244530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:37.106 [2024-11-27 04:38:33.244544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:21:37.106 [2024-11-27 04:38:33.244552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.106 [2024-11-27 04:38:33.269216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.106 [2024-11-27 04:38:33.269275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:37.106 [2024-11-27 04:38:33.269292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.629 ms 00:21:37.106 [2024-11-27 04:38:33.269303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.106 [2024-11-27 04:38:33.292904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.106 [2024-11-27 04:38:33.292955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:37.106 [2024-11-27 04:38:33.292970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.511 ms 00:21:37.106 [2024-11-27 04:38:33.292977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.106 [2024-11-27 04:38:33.293581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.107 [2024-11-27 04:38:33.293603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:37.107 [2024-11-27 04:38:33.293615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:21:37.107 [2024-11-27 04:38:33.293622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.107 [2024-11-27 04:38:33.360904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.107 [2024-11-27 04:38:33.360966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:37.107 [2024-11-27 04:38:33.360984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.248 ms 00:21:37.107 [2024-11-27 04:38:33.360993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
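
The L2P figures reported during this startup are self-consistent with the layout dump further up: 23592960 entries at the logged 4-byte address size is exactly the 90.00 MiB l2p region, one entry per 4 KiB user block, and the same 23592960 reappears below as the FTL bdev's num_blocks. The --l2p_dram_limit 60 passed to bdev_ftl_create caps how much of that table stays resident, which ftl_l2p_cache reports as 59 (of 60) MiB. A quick check of the arithmetic (the variable names here are illustrative only):

  l2p_entries=23592960            # "L2P entries: 23592960" in the layout dump
  entry_size=4                    # "L2P address size: 4" (bytes per entry)
  echo $((l2p_entries * entry_size / 1024 / 1024))    # -> 90, the 90.00 MiB l2p region
  echo $((l2p_entries * 4096 / 1024 / 1024 / 1024))   # -> 90, GiB of user-addressable space
  # --l2p_dram_limit 60 then bounds the resident slice of that 90 MiB table;
  # the cache reports its effective ceiling as 59 of those 60 MiB.
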
00:21:37.107 [2024-11-27 04:38:33.386088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.107 [2024-11-27 04:38:33.386144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:37.107 [2024-11-27 04:38:33.386159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.952 ms 00:21:37.107 [2024-11-27 04:38:33.386167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.107 [2024-11-27 04:38:33.411046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.107 [2024-11-27 04:38:33.411271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:37.107 [2024-11-27 04:38:33.411293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.796 ms 00:21:37.107 [2024-11-27 04:38:33.411300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.107 [2024-11-27 04:38:33.435515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.107 [2024-11-27 04:38:33.435579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:37.107 [2024-11-27 04:38:33.435593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.121 ms 00:21:37.107 [2024-11-27 04:38:33.435601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.107 [2024-11-27 04:38:33.435679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.107 [2024-11-27 04:38:33.435690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:37.107 [2024-11-27 04:38:33.435703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:37.107 [2024-11-27 04:38:33.435711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.107 [2024-11-27 04:38:33.435803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.107 [2024-11-27 04:38:33.435813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:37.107 [2024-11-27 04:38:33.435823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:37.107 [2024-11-27 04:38:33.435830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.107 [2024-11-27 04:38:33.436591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:37.107 [2024-11-27 04:38:33.439931] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2382.065 ms, result 0 00:21:37.107 [2024-11-27 04:38:33.440710] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:37.107 { 00:21:37.107 "name": "ftl0", 00:21:37.107 "uuid": "b290c8a9-93b3-40e8-884a-c7cbc275854f" 00:21:37.107 } 00:21:37.107 04:38:33 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:37.107 04:38:33 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:37.107 04:38:33 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:37.107 04:38:33 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:21:37.107 04:38:33 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:37.107 04:38:33 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:37.107 04:38:33 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:37.107 04:38:33 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:37.367 [ 00:21:37.367 { 00:21:37.367 "name": "ftl0", 00:21:37.367 "aliases": [ 00:21:37.367 "b290c8a9-93b3-40e8-884a-c7cbc275854f" 00:21:37.367 ], 00:21:37.367 "product_name": "FTL disk", 00:21:37.367 "block_size": 4096, 00:21:37.367 "num_blocks": 23592960, 00:21:37.367 "uuid": "b290c8a9-93b3-40e8-884a-c7cbc275854f", 00:21:37.367 "assigned_rate_limits": { 00:21:37.367 "rw_ios_per_sec": 0, 00:21:37.367 "rw_mbytes_per_sec": 0, 00:21:37.367 "r_mbytes_per_sec": 0, 00:21:37.367 "w_mbytes_per_sec": 0 00:21:37.367 }, 00:21:37.367 "claimed": false, 00:21:37.367 "zoned": false, 00:21:37.367 "supported_io_types": { 00:21:37.367 "read": true, 00:21:37.367 "write": true, 00:21:37.367 "unmap": true, 00:21:37.367 "flush": true, 00:21:37.367 "reset": false, 00:21:37.367 "nvme_admin": false, 00:21:37.367 "nvme_io": false, 00:21:37.367 "nvme_io_md": false, 00:21:37.367 "write_zeroes": true, 00:21:37.367 "zcopy": false, 00:21:37.367 "get_zone_info": false, 00:21:37.367 "zone_management": false, 00:21:37.367 "zone_append": false, 00:21:37.367 "compare": false, 00:21:37.367 "compare_and_write": false, 00:21:37.367 "abort": false, 00:21:37.367 "seek_hole": false, 00:21:37.367 "seek_data": false, 00:21:37.367 "copy": false, 00:21:37.367 "nvme_iov_md": false 00:21:37.367 }, 00:21:37.367 "driver_specific": { 00:21:37.367 "ftl": { 00:21:37.367 "base_bdev": "346f6af0-83bf-4004-b193-783d42f0297b", 00:21:37.367 "cache": "nvc0n1p0" 00:21:37.367 } 00:21:37.367 } 00:21:37.367 } 00:21:37.367 ] 00:21:37.367 04:38:33 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:21:37.367 04:38:33 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:37.367 04:38:33 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:37.628 04:38:34 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:37.628 04:38:34 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:37.889 04:38:34 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:37.889 { 00:21:37.889 "name": "ftl0", 00:21:37.889 "aliases": [ 00:21:37.889 "b290c8a9-93b3-40e8-884a-c7cbc275854f" 00:21:37.889 ], 00:21:37.889 "product_name": "FTL disk", 00:21:37.889 "block_size": 4096, 00:21:37.889 "num_blocks": 23592960, 00:21:37.889 "uuid": "b290c8a9-93b3-40e8-884a-c7cbc275854f", 00:21:37.889 "assigned_rate_limits": { 00:21:37.889 "rw_ios_per_sec": 0, 00:21:37.889 "rw_mbytes_per_sec": 0, 00:21:37.889 "r_mbytes_per_sec": 0, 00:21:37.889 "w_mbytes_per_sec": 0 00:21:37.889 }, 00:21:37.889 "claimed": false, 00:21:37.889 "zoned": false, 00:21:37.889 "supported_io_types": { 00:21:37.889 "read": true, 00:21:37.889 "write": true, 00:21:37.889 "unmap": true, 00:21:37.889 "flush": true, 00:21:37.889 "reset": false, 00:21:37.889 "nvme_admin": false, 00:21:37.889 "nvme_io": false, 00:21:37.889 "nvme_io_md": false, 00:21:37.889 "write_zeroes": true, 00:21:37.889 "zcopy": false, 00:21:37.889 "get_zone_info": false, 00:21:37.889 "zone_management": false, 00:21:37.889 "zone_append": false, 00:21:37.889 "compare": false, 00:21:37.889 "compare_and_write": false, 00:21:37.889 "abort": false, 00:21:37.889 "seek_hole": false, 00:21:37.889 "seek_data": false, 00:21:37.889 "copy": false, 00:21:37.889 "nvme_iov_md": false 00:21:37.889 }, 00:21:37.889 "driver_specific": { 00:21:37.889 "ftl": { 00:21:37.889 "base_bdev": "346f6af0-83bf-4004-b193-783d42f0297b", 
00:21:37.889 "cache": "nvc0n1p0" 00:21:37.889 } 00:21:37.889 } 00:21:37.889 } 00:21:37.889 ]' 00:21:37.889 04:38:34 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:37.889 04:38:34 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:37.890 04:38:34 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:38.151 [2024-11-27 04:38:34.476228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.151 [2024-11-27 04:38:34.476428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:38.151 [2024-11-27 04:38:34.476448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:38.151 [2024-11-27 04:38:34.476457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.151 [2024-11-27 04:38:34.476490] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:38.151 [2024-11-27 04:38:34.478687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.151 [2024-11-27 04:38:34.478717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:38.151 [2024-11-27 04:38:34.478746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.178 ms 00:21:38.151 [2024-11-27 04:38:34.478753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.151 [2024-11-27 04:38:34.479166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.151 [2024-11-27 04:38:34.479181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:38.151 [2024-11-27 04:38:34.479191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:21:38.151 [2024-11-27 04:38:34.479197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.151 [2024-11-27 04:38:34.482045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.151 [2024-11-27 04:38:34.482072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:38.151 [2024-11-27 04:38:34.482080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:21:38.151 [2024-11-27 04:38:34.482087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.151 [2024-11-27 04:38:34.487681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.151 [2024-11-27 04:38:34.487847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:38.151 [2024-11-27 04:38:34.487864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.538 ms 00:21:38.151 [2024-11-27 04:38:34.487871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.151 [2024-11-27 04:38:34.507157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.151 [2024-11-27 04:38:34.507344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:38.151 [2024-11-27 04:38:34.507369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.210 ms 00:21:38.151 [2024-11-27 04:38:34.507376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.151 [2024-11-27 04:38:34.520002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.151 [2024-11-27 04:38:34.520053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:38.151 [2024-11-27 04:38:34.520070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.550 ms 00:21:38.151 [2024-11-27 04:38:34.520077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.151 [2024-11-27 04:38:34.520288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.151 [2024-11-27 04:38:34.520297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:38.151 [2024-11-27 04:38:34.520306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:21:38.151 [2024-11-27 04:38:34.520312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.151 [2024-11-27 04:38:34.539402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.151 [2024-11-27 04:38:34.539447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:38.151 [2024-11-27 04:38:34.539458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.059 ms 00:21:38.151 [2024-11-27 04:38:34.539465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.151 [2024-11-27 04:38:34.557564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.151 [2024-11-27 04:38:34.557605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:38.151 [2024-11-27 04:38:34.557619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.028 ms 00:21:38.151 [2024-11-27 04:38:34.557625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.151 [2024-11-27 04:38:34.575343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.152 [2024-11-27 04:38:34.575386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:38.152 [2024-11-27 04:38:34.575397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.659 ms 00:21:38.152 [2024-11-27 04:38:34.575404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.152 [2024-11-27 04:38:34.592917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.152 [2024-11-27 04:38:34.593075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:38.152 [2024-11-27 04:38:34.593094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.409 ms 00:21:38.152 [2024-11-27 04:38:34.593100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.152 [2024-11-27 04:38:34.593166] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:38.152 [2024-11-27 04:38:34.593178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593243] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 
[2024-11-27 04:38:34.593427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:21:38.152 [2024-11-27 04:38:34.593594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:38.152 [2024-11-27 04:38:34.593756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:38.153 [2024-11-27 04:38:34.593905] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:38.153 [2024-11-27 04:38:34.593914] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b290c8a9-93b3-40e8-884a-c7cbc275854f 00:21:38.153 [2024-11-27 04:38:34.593920] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:38.153 [2024-11-27 04:38:34.593927] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:38.153 [2024-11-27 04:38:34.593935] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:38.153 [2024-11-27 04:38:34.593942] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:38.153 [2024-11-27 04:38:34.593947] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:38.153 [2024-11-27 04:38:34.593955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:21:38.153 [2024-11-27 04:38:34.593960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:38.153 [2024-11-27 04:38:34.593967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:38.153 [2024-11-27 04:38:34.593972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:38.153 [2024-11-27 04:38:34.593978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.153 [2024-11-27 04:38:34.593985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:38.153 [2024-11-27 04:38:34.593993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:21:38.153 [2024-11-27 04:38:34.593999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.153 [2024-11-27 04:38:34.604031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.153 [2024-11-27 04:38:34.604178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:38.153 [2024-11-27 04:38:34.604198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.005 ms 00:21:38.153 [2024-11-27 04:38:34.604204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.153 [2024-11-27 04:38:34.604536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.153 [2024-11-27 04:38:34.604545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:38.153 [2024-11-27 04:38:34.604553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:21:38.153 [2024-11-27 04:38:34.604559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.153 [2024-11-27 04:38:34.640096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.153 [2024-11-27 04:38:34.640140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:38.153 [2024-11-27 04:38:34.640153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.153 [2024-11-27 04:38:34.640160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.153 [2024-11-27 04:38:34.640263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.153 [2024-11-27 04:38:34.640271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:38.153 [2024-11-27 04:38:34.640279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.153 [2024-11-27 04:38:34.640286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.153 [2024-11-27 04:38:34.640338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.153 [2024-11-27 04:38:34.640348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:38.153 [2024-11-27 04:38:34.640358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.153 [2024-11-27 04:38:34.640364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.153 [2024-11-27 04:38:34.640391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.153 [2024-11-27 04:38:34.640398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:38.153 [2024-11-27 04:38:34.640406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.153 [2024-11-27 04:38:34.640412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.153 [2024-11-27 04:38:34.706288] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.153 [2024-11-27 04:38:34.706336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:38.153 [2024-11-27 04:38:34.706347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.153 [2024-11-27 04:38:34.706354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.414 [2024-11-27 04:38:34.756131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.414 [2024-11-27 04:38:34.756180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:38.414 [2024-11-27 04:38:34.756192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.414 [2024-11-27 04:38:34.756198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.414 [2024-11-27 04:38:34.756283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.414 [2024-11-27 04:38:34.756291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:38.414 [2024-11-27 04:38:34.756303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.414 [2024-11-27 04:38:34.756309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.414 [2024-11-27 04:38:34.756348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.414 [2024-11-27 04:38:34.756355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:38.414 [2024-11-27 04:38:34.756362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.414 [2024-11-27 04:38:34.756367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.414 [2024-11-27 04:38:34.756458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.414 [2024-11-27 04:38:34.756466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:38.414 [2024-11-27 04:38:34.756473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.414 [2024-11-27 04:38:34.756481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.414 [2024-11-27 04:38:34.756524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.414 [2024-11-27 04:38:34.756531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:38.414 [2024-11-27 04:38:34.756539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.414 [2024-11-27 04:38:34.756546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.414 [2024-11-27 04:38:34.756589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.414 [2024-11-27 04:38:34.756595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:38.414 [2024-11-27 04:38:34.756604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.414 [2024-11-27 04:38:34.756611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.414 [2024-11-27 04:38:34.756659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.414 [2024-11-27 04:38:34.756667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:38.414 [2024-11-27 04:38:34.756675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.414 [2024-11-27 04:38:34.756681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:38.414 [2024-11-27 04:38:34.756852] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 280.616 ms, result 0 00:21:38.414 true 00:21:38.414 04:38:34 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76293 00:21:38.414 04:38:34 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76293 ']' 00:21:38.415 04:38:34 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76293 00:21:38.415 04:38:34 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:38.415 04:38:34 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.415 04:38:34 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76293 00:21:38.415 04:38:34 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:38.415 killing process with pid 76293 00:21:38.415 04:38:34 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:38.415 04:38:34 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76293' 00:21:38.415 04:38:34 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76293 00:21:38.415 04:38:34 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76293 00:21:48.550 04:38:44 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:21:49.491 65536+0 records in 00:21:49.491 65536+0 records out 00:21:49.491 268435456 bytes (268 MB, 256 MiB) copied, 1.06779 s, 251 MB/s 00:21:49.491 04:38:45 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:49.491 [2024-11-27 04:38:46.027804] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
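[annotation] The dd step above generates the 256 MiB random pattern that spdk_dd then writes through ftl0: 65536 records × 4 KiB = 268435456 bytes, and 268435456 B / 1.06779 s ≈ 251 MB/s, which matches the rate dd reports. A minimal sketch for re-checking the pattern file outside the test run — the path is the one from the log, and `stat -c %s` assumes GNU coreutils:
  pattern=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
  expected=$((65536 * 4096))        # 65536 records of 4 KiB, as in the dd invocation above
  actual=$(stat -c %s "$pattern")   # file size in bytes (GNU stat)
  [ "$actual" -eq "$expected" ] && echo "pattern file OK ($actual bytes)"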
00:21:49.491 [2024-11-27 04:38:46.027914] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76475 ] 00:21:49.752 [2024-11-27 04:38:46.188551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.752 [2024-11-27 04:38:46.301301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.013 [2024-11-27 04:38:46.583777] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:50.013 [2024-11-27 04:38:46.583844] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:50.275 [2024-11-27 04:38:46.738456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.276 [2024-11-27 04:38:46.738528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:50.276 [2024-11-27 04:38:46.738549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:50.276 [2024-11-27 04:38:46.738562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.276 [2024-11-27 04:38:46.742866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.276 [2024-11-27 04:38:46.742910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:50.276 [2024-11-27 04:38:46.742926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.277 ms 00:21:50.276 [2024-11-27 04:38:46.742938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.276 [2024-11-27 04:38:46.743098] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:50.276 [2024-11-27 04:38:46.744217] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:50.276 [2024-11-27 04:38:46.744257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.276 [2024-11-27 04:38:46.744272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:50.276 [2024-11-27 04:38:46.744287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:21:50.276 [2024-11-27 04:38:46.744300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.276 [2024-11-27 04:38:46.745605] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:50.276 [2024-11-27 04:38:46.762103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.276 [2024-11-27 04:38:46.762143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:50.276 [2024-11-27 04:38:46.762155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.499 ms 00:21:50.276 [2024-11-27 04:38:46.762163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.276 [2024-11-27 04:38:46.762264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.276 [2024-11-27 04:38:46.762275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:50.276 [2024-11-27 04:38:46.762284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:50.276 [2024-11-27 04:38:46.762291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.276 [2024-11-27 04:38:46.767137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:50.276 [2024-11-27 04:38:46.767172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:50.276 [2024-11-27 04:38:46.767182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.805 ms 00:21:50.276 [2024-11-27 04:38:46.767190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.276 [2024-11-27 04:38:46.767280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.276 [2024-11-27 04:38:46.767290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:50.276 [2024-11-27 04:38:46.767298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:50.276 [2024-11-27 04:38:46.767305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.276 [2024-11-27 04:38:46.767332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.276 [2024-11-27 04:38:46.767341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:50.276 [2024-11-27 04:38:46.767349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:50.276 [2024-11-27 04:38:46.767356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.276 [2024-11-27 04:38:46.767376] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:50.276 [2024-11-27 04:38:46.770732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.276 [2024-11-27 04:38:46.770761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:50.276 [2024-11-27 04:38:46.770770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.362 ms 00:21:50.276 [2024-11-27 04:38:46.770777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.276 [2024-11-27 04:38:46.770811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.276 [2024-11-27 04:38:46.770820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:50.276 [2024-11-27 04:38:46.770828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:50.276 [2024-11-27 04:38:46.770835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.276 [2024-11-27 04:38:46.770855] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:50.276 [2024-11-27 04:38:46.770871] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:50.276 [2024-11-27 04:38:46.770904] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:50.276 [2024-11-27 04:38:46.770919] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:50.276 [2024-11-27 04:38:46.771020] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:50.276 [2024-11-27 04:38:46.771030] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:50.276 [2024-11-27 04:38:46.771039] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:50.276 [2024-11-27 04:38:46.771051] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:50.276 [2024-11-27 04:38:46.771060] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:50.276 [2024-11-27 04:38:46.771068] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:50.276 [2024-11-27 04:38:46.771075] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:50.276 [2024-11-27 04:38:46.771083] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:50.276 [2024-11-27 04:38:46.771089] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:50.276 [2024-11-27 04:38:46.771097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.276 [2024-11-27 04:38:46.771104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:50.276 [2024-11-27 04:38:46.771112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:21:50.276 [2024-11-27 04:38:46.771119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.276 [2024-11-27 04:38:46.771205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.276 [2024-11-27 04:38:46.771215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:50.276 [2024-11-27 04:38:46.771223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:50.276 [2024-11-27 04:38:46.771230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.276 [2024-11-27 04:38:46.771345] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:50.276 [2024-11-27 04:38:46.771355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:50.276 [2024-11-27 04:38:46.771363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:50.276 [2024-11-27 04:38:46.771371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.276 [2024-11-27 04:38:46.771379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:50.276 [2024-11-27 04:38:46.771385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:50.276 [2024-11-27 04:38:46.771392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:50.276 [2024-11-27 04:38:46.771399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:50.276 [2024-11-27 04:38:46.771406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:50.276 [2024-11-27 04:38:46.771412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:50.276 [2024-11-27 04:38:46.771419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:50.276 [2024-11-27 04:38:46.771435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:50.276 [2024-11-27 04:38:46.771442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:50.276 [2024-11-27 04:38:46.771448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:50.276 [2024-11-27 04:38:46.771455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:50.276 [2024-11-27 04:38:46.771461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.276 [2024-11-27 04:38:46.771468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:50.276 [2024-11-27 04:38:46.771477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:50.276 [2024-11-27 04:38:46.771483] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.276 [2024-11-27 04:38:46.771490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:50.276 [2024-11-27 04:38:46.771497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:50.276 [2024-11-27 04:38:46.771503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.276 [2024-11-27 04:38:46.771510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:50.276 [2024-11-27 04:38:46.771517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:50.276 [2024-11-27 04:38:46.771523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.276 [2024-11-27 04:38:46.771529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:50.276 [2024-11-27 04:38:46.771536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:50.276 [2024-11-27 04:38:46.771542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.277 [2024-11-27 04:38:46.771548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:50.277 [2024-11-27 04:38:46.771554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:50.277 [2024-11-27 04:38:46.771561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.277 [2024-11-27 04:38:46.771567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:50.277 [2024-11-27 04:38:46.771573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:50.277 [2024-11-27 04:38:46.771580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:50.277 [2024-11-27 04:38:46.771587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:50.277 [2024-11-27 04:38:46.771593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:50.277 [2024-11-27 04:38:46.771599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:50.277 [2024-11-27 04:38:46.771606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:50.277 [2024-11-27 04:38:46.771612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:50.277 [2024-11-27 04:38:46.771618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.277 [2024-11-27 04:38:46.771624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:50.277 [2024-11-27 04:38:46.771631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:50.277 [2024-11-27 04:38:46.771638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.277 [2024-11-27 04:38:46.771644] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:50.277 [2024-11-27 04:38:46.771651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:50.277 [2024-11-27 04:38:46.771660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:50.277 [2024-11-27 04:38:46.771667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.277 [2024-11-27 04:38:46.771675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:50.277 [2024-11-27 04:38:46.771681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:50.277 [2024-11-27 04:38:46.771688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:50.277 
[2024-11-27 04:38:46.771695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:50.277 [2024-11-27 04:38:46.771701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:50.277 [2024-11-27 04:38:46.771707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:50.277 [2024-11-27 04:38:46.771715] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:50.277 [2024-11-27 04:38:46.771742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:50.277 [2024-11-27 04:38:46.771751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:50.277 [2024-11-27 04:38:46.771758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:50.277 [2024-11-27 04:38:46.771765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:50.277 [2024-11-27 04:38:46.771773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:50.277 [2024-11-27 04:38:46.771780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:50.277 [2024-11-27 04:38:46.771787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:50.277 [2024-11-27 04:38:46.771794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:50.277 [2024-11-27 04:38:46.771805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:50.277 [2024-11-27 04:38:46.771812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:50.277 [2024-11-27 04:38:46.771819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:50.277 [2024-11-27 04:38:46.771826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:50.277 [2024-11-27 04:38:46.771833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:50.277 [2024-11-27 04:38:46.771840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:50.277 [2024-11-27 04:38:46.771846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:50.277 [2024-11-27 04:38:46.771854] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:50.277 [2024-11-27 04:38:46.771862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:50.277 [2024-11-27 04:38:46.771870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:50.277 [2024-11-27 04:38:46.771877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:50.277 [2024-11-27 04:38:46.771884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:50.277 [2024-11-27 04:38:46.771891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:50.277 [2024-11-27 04:38:46.771899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.277 [2024-11-27 04:38:46.771910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:50.277 [2024-11-27 04:38:46.771917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:21:50.277 [2024-11-27 04:38:46.771924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.277 [2024-11-27 04:38:46.800258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.277 [2024-11-27 04:38:46.800318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:50.277 [2024-11-27 04:38:46.800337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.280 ms 00:21:50.277 [2024-11-27 04:38:46.800350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.277 [2024-11-27 04:38:46.800539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.277 [2024-11-27 04:38:46.800555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:50.277 [2024-11-27 04:38:46.800568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:21:50.277 [2024-11-27 04:38:46.800580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.277 [2024-11-27 04:38:46.845407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.277 [2024-11-27 04:38:46.845457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:50.277 [2024-11-27 04:38:46.845473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.795 ms 00:21:50.277 [2024-11-27 04:38:46.845481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.277 [2024-11-27 04:38:46.845584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.277 [2024-11-27 04:38:46.845594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:50.277 [2024-11-27 04:38:46.845603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:50.277 [2024-11-27 04:38:46.845610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.277 [2024-11-27 04:38:46.845949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.277 [2024-11-27 04:38:46.845964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:50.277 [2024-11-27 04:38:46.845979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:21:50.277 [2024-11-27 04:38:46.845986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.277 [2024-11-27 04:38:46.846110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.277 [2024-11-27 04:38:46.846172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:50.277 [2024-11-27 04:38:46.846183] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:21:50.277 [2024-11-27 04:38:46.846191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.539 [2024-11-27 04:38:46.859291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.539 [2024-11-27 04:38:46.859320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:50.539 [2024-11-27 04:38:46.859330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.079 ms 00:21:50.539 [2024-11-27 04:38:46.859338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.539 [2024-11-27 04:38:46.871773] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:50.539 [2024-11-27 04:38:46.871810] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:50.539 [2024-11-27 04:38:46.871822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.539 [2024-11-27 04:38:46.871830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:50.539 [2024-11-27 04:38:46.871839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.386 ms 00:21:50.539 [2024-11-27 04:38:46.871846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.539 [2024-11-27 04:38:46.896478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.539 [2024-11-27 04:38:46.896532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:50.539 [2024-11-27 04:38:46.896545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.549 ms 00:21:50.539 [2024-11-27 04:38:46.896554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.539 [2024-11-27 04:38:46.908353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.539 [2024-11-27 04:38:46.908493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:50.539 [2024-11-27 04:38:46.908510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.717 ms 00:21:50.539 [2024-11-27 04:38:46.908517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.539 [2024-11-27 04:38:46.919606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.539 [2024-11-27 04:38:46.919735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:50.539 [2024-11-27 04:38:46.919751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.020 ms 00:21:50.539 [2024-11-27 04:38:46.919758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.539 [2024-11-27 04:38:46.920357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.539 [2024-11-27 04:38:46.920378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:50.539 [2024-11-27 04:38:46.920387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:21:50.539 [2024-11-27 04:38:46.920395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.539 [2024-11-27 04:38:46.991510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.539 [2024-11-27 04:38:46.991715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:50.539 [2024-11-27 04:38:46.991756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 71.091 ms 00:21:50.539 [2024-11-27 04:38:46.991765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.539 [2024-11-27 04:38:47.002659] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:50.539 [2024-11-27 04:38:47.017135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.539 [2024-11-27 04:38:47.017176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:50.539 [2024-11-27 04:38:47.017190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.258 ms 00:21:50.539 [2024-11-27 04:38:47.017199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.539 [2024-11-27 04:38:47.017300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.539 [2024-11-27 04:38:47.017312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:50.539 [2024-11-27 04:38:47.017321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:50.539 [2024-11-27 04:38:47.017328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.539 [2024-11-27 04:38:47.017375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.539 [2024-11-27 04:38:47.017384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:50.539 [2024-11-27 04:38:47.017392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:50.539 [2024-11-27 04:38:47.017399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.539 [2024-11-27 04:38:47.017428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.539 [2024-11-27 04:38:47.017438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:50.539 [2024-11-27 04:38:47.017446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:50.539 [2024-11-27 04:38:47.017453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.539 [2024-11-27 04:38:47.017483] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:50.539 [2024-11-27 04:38:47.017493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.540 [2024-11-27 04:38:47.017500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:50.540 [2024-11-27 04:38:47.017508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:50.540 [2024-11-27 04:38:47.017515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.540 [2024-11-27 04:38:47.041052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.540 [2024-11-27 04:38:47.041092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:50.540 [2024-11-27 04:38:47.041104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.516 ms 00:21:50.540 [2024-11-27 04:38:47.041113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.540 [2024-11-27 04:38:47.041213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.540 [2024-11-27 04:38:47.041224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:50.540 [2024-11-27 04:38:47.041233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:50.540 [2024-11-27 04:38:47.041241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
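[annotation] Each FTL management step above is traced as an Action with a name, per-step duration, and status, and the 'Management process finished' line that follows reports the end-to-end total: 303.304 ms for this second startup against a restored device, versus 2382.065 ms for the initial startup earlier in the log. A minimal sketch for tallying the per-step durations from a saved copy of this output (the ftl.log filename is assumed; the grep also picks up shutdown steps, so treat the sum as a rough cross-check, not an exact match for either total):
  grep -o 'duration: [0-9.]* ms' ftl.log | awk '{sum += $2} END {printf "%.3f ms total\n", sum}'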
00:21:50.540 [2024-11-27 04:38:47.042021] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:50.540 [2024-11-27 04:38:47.045213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.304 ms, result 0 00:21:50.540 [2024-11-27 04:38:47.045830] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:50.540 [2024-11-27 04:38:47.058610] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:51.482  [2024-11-27T04:38:49.440Z] Copying: 41/256 [MB] (41 MBps) [2024-11-27T04:38:50.373Z] Copying: 84/256 [MB] (43 MBps) [2024-11-27T04:38:51.308Z] Copying: 130/256 [MB] (46 MBps) [2024-11-27T04:38:52.322Z] Copying: 175/256 [MB] (45 MBps) [2024-11-27T04:38:53.261Z] Copying: 218/256 [MB] (43 MBps) [2024-11-27T04:38:53.261Z] Copying: 256/256 [MB] (average 43 MBps)[2024-11-27 04:38:52.920711] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:56.674 [2024-11-27 04:38:52.929734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.674 [2024-11-27 04:38:52.929768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:56.674 [2024-11-27 04:38:52.929781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:56.674 [2024-11-27 04:38:52.929794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.674 [2024-11-27 04:38:52.929815] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:56.674 [2024-11-27 04:38:52.932486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.674 [2024-11-27 04:38:52.932567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:56.674 [2024-11-27 04:38:52.932656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.658 ms 00:21:56.674 [2024-11-27 04:38:52.932681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.674 [2024-11-27 04:38:52.934309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.674 [2024-11-27 04:38:52.934402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:56.674 [2024-11-27 04:38:52.934465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.544 ms 00:21:56.674 [2024-11-27 04:38:52.934488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.674 [2024-11-27 04:38:52.940799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.674 [2024-11-27 04:38:52.940914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:56.674 [2024-11-27 04:38:52.940969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.280 ms 00:21:56.674 [2024-11-27 04:38:52.940992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.674 [2024-11-27 04:38:52.947902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.674 [2024-11-27 04:38:52.948010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:56.674 [2024-11-27 04:38:52.948083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.860 ms 00:21:56.674 [2024-11-27 04:38:52.948109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.674 [2024-11-27 
04:38:52.970552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.674 [2024-11-27 04:38:52.970694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:56.674 [2024-11-27 04:38:52.970787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.390 ms 00:21:56.674 [2024-11-27 04:38:52.970843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.674 [2024-11-27 04:38:52.984678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.674 [2024-11-27 04:38:52.984792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:56.674 [2024-11-27 04:38:52.984853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.791 ms 00:21:56.674 [2024-11-27 04:38:52.984893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.674 [2024-11-27 04:38:52.985069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.674 [2024-11-27 04:38:52.985101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:56.674 [2024-11-27 04:38:52.985162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:21:56.674 [2024-11-27 04:38:52.985194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.674 [2024-11-27 04:38:53.007240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.674 [2024-11-27 04:38:53.007340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:56.674 [2024-11-27 04:38:53.007387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.017 ms 00:21:56.674 [2024-11-27 04:38:53.007408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.674 [2024-11-27 04:38:53.029442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.674 [2024-11-27 04:38:53.029551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:56.674 [2024-11-27 04:38:53.029602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.993 ms 00:21:56.674 [2024-11-27 04:38:53.029624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.674 [2024-11-27 04:38:53.051803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.674 [2024-11-27 04:38:53.051912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:56.674 [2024-11-27 04:38:53.051957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.138 ms 00:21:56.674 [2024-11-27 04:38:53.051979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.674 [2024-11-27 04:38:53.073988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.674 [2024-11-27 04:38:53.074100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:56.674 [2024-11-27 04:38:53.074151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.940 ms 00:21:56.674 [2024-11-27 04:38:53.074173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.674 [2024-11-27 04:38:53.074212] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:56.674 [2024-11-27 04:38:53.074239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:56.674 [2024-11-27 04:38:53.074292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:21:56.674 [2024-11-27 04:38:53.074324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:56.674 [2024-11-27 04:38:53.074353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:56.674 [2024-11-27 04:38:53.074407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:56.674 [2024-11-27 04:38:53.074440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:56.674 [2024-11-27 04:38:53.074468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:56.674 [2024-11-27 04:38:53.074496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:56.674 [2024-11-27 04:38:53.074562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:56.674 [2024-11-27 04:38:53.074592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:56.674 [2024-11-27 04:38:53.074620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:56.674 [2024-11-27 04:38:53.074649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:56.674 [2024-11-27 04:38:53.074704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:21:56.675 [2024-11-27 04:38:53.074839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.074993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075196] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:56.675 [2024-11-27 04:38:53.075313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:56.676 [2024-11-27 04:38:53.075320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:56.676 [2024-11-27 04:38:53.075350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:56.676 [2024-11-27 04:38:53.075357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:56.676 [2024-11-27 04:38:53.075364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:56.676 [2024-11-27 04:38:53.075372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:56.676 [2024-11-27 04:38:53.075379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:56.676 [2024-11-27 04:38:53.075387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:56.676 [2024-11-27 04:38:53.075403] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:56.676 [2024-11-27 04:38:53.075411] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: b290c8a9-93b3-40e8-884a-c7cbc275854f 00:21:56.676 [2024-11-27 04:38:53.075419] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:56.676 [2024-11-27 04:38:53.075426] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:56.676 [2024-11-27 04:38:53.075433] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:56.676 [2024-11-27 04:38:53.075440] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:56.676 [2024-11-27 04:38:53.075446] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:56.676 [2024-11-27 04:38:53.075454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:56.676 [2024-11-27 04:38:53.075460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:56.676 [2024-11-27 04:38:53.075467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:56.676 [2024-11-27 04:38:53.075473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:56.676 [2024-11-27 04:38:53.075480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.676 [2024-11-27 04:38:53.075489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:56.676 [2024-11-27 04:38:53.075497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:21:56.676 [2024-11-27 04:38:53.075504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.676 [2024-11-27 04:38:53.087592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.676 [2024-11-27 04:38:53.087622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:56.676 [2024-11-27 04:38:53.087632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.057 ms 00:21:56.676 [2024-11-27 04:38:53.087640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.676 [2024-11-27 04:38:53.088010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.676 [2024-11-27 04:38:53.088028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:56.676 [2024-11-27 04:38:53.088037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:21:56.676 [2024-11-27 04:38:53.088044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.676 [2024-11-27 04:38:53.122480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.676 [2024-11-27 04:38:53.122532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:56.676 [2024-11-27 04:38:53.122543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.676 [2024-11-27 04:38:53.122551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.676 [2024-11-27 04:38:53.122645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.676 [2024-11-27 04:38:53.122655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:56.676 [2024-11-27 04:38:53.122662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.676 [2024-11-27 04:38:53.122670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.676 [2024-11-27 04:38:53.122713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.676 [2024-11-27 04:38:53.122738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:21:56.676 [2024-11-27 04:38:53.122747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.676 [2024-11-27 04:38:53.122754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.676 [2024-11-27 04:38:53.122772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.676 [2024-11-27 04:38:53.122782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:56.676 [2024-11-27 04:38:53.122790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.676 [2024-11-27 04:38:53.122797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.676 [2024-11-27 04:38:53.199142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.676 [2024-11-27 04:38:53.199179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:56.676 [2024-11-27 04:38:53.199190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.676 [2024-11-27 04:38:53.199198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.984 [2024-11-27 04:38:53.262105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.984 [2024-11-27 04:38:53.262151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:56.984 [2024-11-27 04:38:53.262161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.984 [2024-11-27 04:38:53.262169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.984 [2024-11-27 04:38:53.262233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.984 [2024-11-27 04:38:53.262241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:56.984 [2024-11-27 04:38:53.262249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.984 [2024-11-27 04:38:53.262257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.984 [2024-11-27 04:38:53.262283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.984 [2024-11-27 04:38:53.262291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:56.984 [2024-11-27 04:38:53.262302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.984 [2024-11-27 04:38:53.262310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.984 [2024-11-27 04:38:53.262391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.984 [2024-11-27 04:38:53.262399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:56.984 [2024-11-27 04:38:53.262407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.984 [2024-11-27 04:38:53.262414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.984 [2024-11-27 04:38:53.262443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.984 [2024-11-27 04:38:53.262452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:56.984 [2024-11-27 04:38:53.262459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.984 [2024-11-27 04:38:53.262469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.984 [2024-11-27 04:38:53.262503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.984 [2024-11-27 04:38:53.262511] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:56.984 [2024-11-27 04:38:53.262519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.984 [2024-11-27 04:38:53.262526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.984 [2024-11-27 04:38:53.262564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.984 [2024-11-27 04:38:53.262574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:56.984 [2024-11-27 04:38:53.262584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.984 [2024-11-27 04:38:53.262591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.984 [2024-11-27 04:38:53.262720] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.985 ms, result 0 00:21:57.923 00:21:57.923 00:21:57.923 04:38:54 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76572 00:21:57.923 04:38:54 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76572 00:21:57.923 04:38:54 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:57.923 04:38:54 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76572 ']' 00:21:57.923 04:38:54 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.923 04:38:54 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.923 04:38:54 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.923 04:38:54 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.923 04:38:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:57.923 [2024-11-27 04:38:54.397855] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:21:57.923 [2024-11-27 04:38:54.397966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76572 ] 00:21:58.191 [2024-11-27 04:38:54.560471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.191 [2024-11-27 04:38:54.660264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.774 04:38:55 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.774 04:38:55 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:58.774 04:38:55 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:59.033 [2024-11-27 04:38:55.504374] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:59.033 [2024-11-27 04:38:55.504439] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:59.293 [2024-11-27 04:38:55.654628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.654827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:59.293 [2024-11-27 04:38:55.654850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:59.293 [2024-11-27 04:38:55.654859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.293 [2024-11-27 04:38:55.657480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.657515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:59.293 [2024-11-27 04:38:55.657526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.599 ms 00:21:59.293 [2024-11-27 04:38:55.657534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.293 [2024-11-27 04:38:55.657634] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:59.293 [2024-11-27 04:38:55.658367] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:59.293 [2024-11-27 04:38:55.658388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.658396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:59.293 [2024-11-27 04:38:55.658407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:21:59.293 [2024-11-27 04:38:55.658416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.293 [2024-11-27 04:38:55.659456] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:59.293 [2024-11-27 04:38:55.671638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.671674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:59.293 [2024-11-27 04:38:55.671685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.187 ms 00:21:59.293 [2024-11-27 04:38:55.671695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.293 [2024-11-27 04:38:55.671787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.671800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:59.293 [2024-11-27 04:38:55.671808] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:59.293 [2024-11-27 04:38:55.671818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.293 [2024-11-27 04:38:55.676349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.676387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:59.293 [2024-11-27 04:38:55.676396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.483 ms 00:21:59.293 [2024-11-27 04:38:55.676405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.293 [2024-11-27 04:38:55.676497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.676509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:59.293 [2024-11-27 04:38:55.676517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:59.293 [2024-11-27 04:38:55.676529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.293 [2024-11-27 04:38:55.676551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.676561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:59.293 [2024-11-27 04:38:55.676568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:59.293 [2024-11-27 04:38:55.676577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.293 [2024-11-27 04:38:55.676599] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:59.293 [2024-11-27 04:38:55.679785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.679912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:59.293 [2024-11-27 04:38:55.679930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.189 ms 00:21:59.293 [2024-11-27 04:38:55.679938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.293 [2024-11-27 04:38:55.679974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.679982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:59.293 [2024-11-27 04:38:55.679994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:59.293 [2024-11-27 04:38:55.680001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.293 [2024-11-27 04:38:55.680022] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:59.293 [2024-11-27 04:38:55.680039] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:59.293 [2024-11-27 04:38:55.680077] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:59.293 [2024-11-27 04:38:55.680092] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:59.293 [2024-11-27 04:38:55.680194] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:59.293 [2024-11-27 04:38:55.680204] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:59.293 [2024-11-27 04:38:55.680220] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:59.293 [2024-11-27 04:38:55.680230] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:59.293 [2024-11-27 04:38:55.680241] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:59.293 [2024-11-27 04:38:55.680249] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:59.293 [2024-11-27 04:38:55.680257] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:59.293 [2024-11-27 04:38:55.680264] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:59.293 [2024-11-27 04:38:55.680275] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:59.293 [2024-11-27 04:38:55.680282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.680290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:59.293 [2024-11-27 04:38:55.680299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:21:59.293 [2024-11-27 04:38:55.680309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.293 [2024-11-27 04:38:55.680395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.680405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:59.293 [2024-11-27 04:38:55.680412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:59.293 [2024-11-27 04:38:55.680420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.293 [2024-11-27 04:38:55.680529] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:59.293 [2024-11-27 04:38:55.680541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:59.293 [2024-11-27 04:38:55.680548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:59.293 [2024-11-27 04:38:55.680558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.293 [2024-11-27 04:38:55.680565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:59.293 [2024-11-27 04:38:55.680573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:59.293 [2024-11-27 04:38:55.680580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:59.293 [2024-11-27 04:38:55.680591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:59.293 [2024-11-27 04:38:55.680599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:59.293 [2024-11-27 04:38:55.680608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:59.293 [2024-11-27 04:38:55.680614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:59.293 [2024-11-27 04:38:55.680622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:59.293 [2024-11-27 04:38:55.680628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:59.293 [2024-11-27 04:38:55.680637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:59.293 [2024-11-27 04:38:55.680643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:59.293 [2024-11-27 04:38:55.680651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.293 
[2024-11-27 04:38:55.680658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:59.293 [2024-11-27 04:38:55.680666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:59.293 [2024-11-27 04:38:55.680678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.293 [2024-11-27 04:38:55.680686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:59.293 [2024-11-27 04:38:55.680693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:59.293 [2024-11-27 04:38:55.680701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:59.293 [2024-11-27 04:38:55.680708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:59.293 [2024-11-27 04:38:55.680717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:59.293 [2024-11-27 04:38:55.680749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:59.293 [2024-11-27 04:38:55.680763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:59.293 [2024-11-27 04:38:55.680770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:59.293 [2024-11-27 04:38:55.680778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:59.293 [2024-11-27 04:38:55.680784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:59.293 [2024-11-27 04:38:55.680793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:59.293 [2024-11-27 04:38:55.680799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:59.293 [2024-11-27 04:38:55.680807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:59.293 [2024-11-27 04:38:55.680813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:59.293 [2024-11-27 04:38:55.680840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:59.293 [2024-11-27 04:38:55.680847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:59.293 [2024-11-27 04:38:55.680856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:59.293 [2024-11-27 04:38:55.680862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:59.293 [2024-11-27 04:38:55.680870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:59.293 [2024-11-27 04:38:55.680876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:59.293 [2024-11-27 04:38:55.680885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.293 [2024-11-27 04:38:55.680892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:59.293 [2024-11-27 04:38:55.680900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:59.293 [2024-11-27 04:38:55.680906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.293 [2024-11-27 04:38:55.680915] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:59.293 [2024-11-27 04:38:55.680924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:59.293 [2024-11-27 04:38:55.680933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:59.293 [2024-11-27 04:38:55.680939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.293 [2024-11-27 04:38:55.680948] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:59.293 [2024-11-27 04:38:55.680955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:59.293 [2024-11-27 04:38:55.680963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:59.293 [2024-11-27 04:38:55.680970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:59.293 [2024-11-27 04:38:55.680978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:59.293 [2024-11-27 04:38:55.680985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:59.293 [2024-11-27 04:38:55.680995] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:59.293 [2024-11-27 04:38:55.681004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:59.293 [2024-11-27 04:38:55.681016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:59.293 [2024-11-27 04:38:55.681023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:59.293 [2024-11-27 04:38:55.681033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:59.293 [2024-11-27 04:38:55.681040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:59.293 [2024-11-27 04:38:55.681048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:59.293 [2024-11-27 04:38:55.681055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:59.293 [2024-11-27 04:38:55.681064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:59.293 [2024-11-27 04:38:55.681070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:59.293 [2024-11-27 04:38:55.681079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:59.293 [2024-11-27 04:38:55.681085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:59.293 [2024-11-27 04:38:55.681094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:59.293 [2024-11-27 04:38:55.681101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:59.293 [2024-11-27 04:38:55.681110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:59.293 [2024-11-27 04:38:55.681117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:59.293 [2024-11-27 04:38:55.681125] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:59.293 [2024-11-27 
04:38:55.681133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:59.293 [2024-11-27 04:38:55.681144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:59.293 [2024-11-27 04:38:55.681151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:59.293 [2024-11-27 04:38:55.681160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:59.293 [2024-11-27 04:38:55.681167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:59.293 [2024-11-27 04:38:55.681176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.681183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:59.293 [2024-11-27 04:38:55.681191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:21:59.293 [2024-11-27 04:38:55.681200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.293 [2024-11-27 04:38:55.706590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.706624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:59.293 [2024-11-27 04:38:55.706635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.332 ms 00:21:59.293 [2024-11-27 04:38:55.706644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.293 [2024-11-27 04:38:55.706779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.706790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:59.293 [2024-11-27 04:38:55.706800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:21:59.293 [2024-11-27 04:38:55.706807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.293 [2024-11-27 04:38:55.736614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.293 [2024-11-27 04:38:55.736647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:59.293 [2024-11-27 04:38:55.736658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.784 ms 00:21:59.293 [2024-11-27 04:38:55.736665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.294 [2024-11-27 04:38:55.736735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.294 [2024-11-27 04:38:55.736745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:59.294 [2024-11-27 04:38:55.736755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:59.294 [2024-11-27 04:38:55.736762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.294 [2024-11-27 04:38:55.737074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.294 [2024-11-27 04:38:55.737121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:59.294 [2024-11-27 04:38:55.737133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:21:59.294 [2024-11-27 04:38:55.737141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:59.294 [2024-11-27 04:38:55.737260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.294 [2024-11-27 04:38:55.737268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:59.294 [2024-11-27 04:38:55.737277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:21:59.294 [2024-11-27 04:38:55.737285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.294 [2024-11-27 04:38:55.751192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.294 [2024-11-27 04:38:55.751220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:59.294 [2024-11-27 04:38:55.751231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.885 ms 00:21:59.294 [2024-11-27 04:38:55.751239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.294 [2024-11-27 04:38:55.779122] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:59.294 [2024-11-27 04:38:55.779160] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:59.294 [2024-11-27 04:38:55.779177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.294 [2024-11-27 04:38:55.779186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:59.294 [2024-11-27 04:38:55.779197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.830 ms 00:21:59.294 [2024-11-27 04:38:55.779209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.294 [2024-11-27 04:38:55.803399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.294 [2024-11-27 04:38:55.803433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:59.294 [2024-11-27 04:38:55.803446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.118 ms 00:21:59.294 [2024-11-27 04:38:55.803456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.294 [2024-11-27 04:38:55.814989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.294 [2024-11-27 04:38:55.815020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:59.294 [2024-11-27 04:38:55.815033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.465 ms 00:21:59.294 [2024-11-27 04:38:55.815040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.294 [2024-11-27 04:38:55.826170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.294 [2024-11-27 04:38:55.826198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:59.294 [2024-11-27 04:38:55.826210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.068 ms 00:21:59.294 [2024-11-27 04:38:55.826217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.294 [2024-11-27 04:38:55.826837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.294 [2024-11-27 04:38:55.826860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:59.294 [2024-11-27 04:38:55.826871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:21:59.294 [2024-11-27 04:38:55.826878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.551 [2024-11-27 
04:38:55.880683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.551 [2024-11-27 04:38:55.880750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:59.551 [2024-11-27 04:38:55.880766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.781 ms 00:21:59.551 [2024-11-27 04:38:55.880774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.551 [2024-11-27 04:38:55.891015] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:59.551 [2024-11-27 04:38:55.904512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.551 [2024-11-27 04:38:55.904559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:59.551 [2024-11-27 04:38:55.904571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.645 ms 00:21:59.551 [2024-11-27 04:38:55.904580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.551 [2024-11-27 04:38:55.904654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.551 [2024-11-27 04:38:55.904665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:59.551 [2024-11-27 04:38:55.904674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:59.551 [2024-11-27 04:38:55.904683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.551 [2024-11-27 04:38:55.904752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.551 [2024-11-27 04:38:55.904764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:59.551 [2024-11-27 04:38:55.904772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:59.551 [2024-11-27 04:38:55.904783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.551 [2024-11-27 04:38:55.904806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.551 [2024-11-27 04:38:55.904816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:59.551 [2024-11-27 04:38:55.904830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:59.551 [2024-11-27 04:38:55.904842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.551 [2024-11-27 04:38:55.904872] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:59.551 [2024-11-27 04:38:55.904885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.551 [2024-11-27 04:38:55.904895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:59.551 [2024-11-27 04:38:55.904904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:59.551 [2024-11-27 04:38:55.904912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.551 [2024-11-27 04:38:55.928047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.551 [2024-11-27 04:38:55.928081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:59.551 [2024-11-27 04:38:55.928095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.111 ms 00:21:59.551 [2024-11-27 04:38:55.928103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.551 [2024-11-27 04:38:55.928195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.551 [2024-11-27 04:38:55.928205] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:21:59.551 [2024-11-27 04:38:55.928217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:21:59.551 [2024-11-27 04:38:55.928224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.551 [2024-11-27 04:38:55.929018] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:59.551 [2024-11-27 04:38:55.931889] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.084 ms, result 0
00:21:59.551 [2024-11-27 04:38:55.933937] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:59.551 Some configs were skipped because the RPC state that can call them passed over.
00:21:59.551 04:38:55 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:21:59.808 [2024-11-27 04:38:56.158809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.808 [2024-11-27 04:38:56.158864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:21:59.808 [2024-11-27 04:38:56.158878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.322 ms
00:21:59.808 [2024-11-27 04:38:56.158888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.808 [2024-11-27 04:38:56.158922] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.441 ms, result 0
00:21:59.808 true
00:21:59.808 04:38:56 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:21:59.808 [2024-11-27 04:38:56.362638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.808 [2024-11-27 04:38:56.362787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:21:59.808 [2024-11-27 04:38:56.362809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms
00:21:59.808 [2024-11-27 04:38:56.362818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.808 [2024-11-27 04:38:56.362860] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.147 ms, result 0
00:21:59.808 true
00:21:59.808 04:38:56 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76572
00:21:59.808 04:38:56 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76572 ']'
00:21:59.808 04:38:56 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76572
00:21:59.808 04:38:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:21:59.808 04:38:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:59.808 04:38:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76572
00:22:00.067 killing process with pid 76572 04:38:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:00.067 04:38:56 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:00.067 04:38:56 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76572'
00:22:00.067 04:38:56 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76572
00:22:00.067 04:38:56 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76572
00:22:00.640 [2024-11-27 04:38:57.099063]
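
The two bdev_ftl_unmap calls bracket the device's address space: the layout dump later in this log reports 23592960 L2P entries, and 23592960 - 1024 = 23591936, so the test trims 1024 blocks at the very start and the very end of the device. The killprocess xtrace above implies a helper along these lines; a minimal sketch reconstructed from the trace alone (the @NNN markers are the autotest_common.sh line numbers shown in the log), not the upstream source:

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                          # @954: require a pid
    kill -0 "$pid" || return 1                         # @958: bail out if not alive
    if [ "$(uname)" = Linux ]; then                    # @959
      process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_0 in this run
    fi
    # @964: a sudo-wrapped process takes another path, not exercised here
    if [ "$process_name" != sudo ]; then
      echo "killing process with pid $pid"             # @972
      kill "$pid"                                      # @973
      wait "$pid"                                      # @978: reap it, propagating status
    fi
  }
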
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.640 [2024-11-27 04:38:57.099120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:00.640 [2024-11-27 04:38:57.099134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:00.640 [2024-11-27 04:38:57.099143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.640 [2024-11-27 04:38:57.099166] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:00.640 [2024-11-27 04:38:57.101826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.640 [2024-11-27 04:38:57.101857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:00.640 [2024-11-27 04:38:57.101871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.643 ms 00:22:00.640 [2024-11-27 04:38:57.101879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.640 [2024-11-27 04:38:57.102160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.640 [2024-11-27 04:38:57.102169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:00.640 [2024-11-27 04:38:57.102178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:22:00.640 [2024-11-27 04:38:57.102186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.640 [2024-11-27 04:38:57.106121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.640 [2024-11-27 04:38:57.106150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:00.640 [2024-11-27 04:38:57.106161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.914 ms 00:22:00.640 [2024-11-27 04:38:57.106168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.640 [2024-11-27 04:38:57.113076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.640 [2024-11-27 04:38:57.113233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:00.640 [2024-11-27 04:38:57.113252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.872 ms 00:22:00.640 [2024-11-27 04:38:57.113261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.640 [2024-11-27 04:38:57.122291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.640 [2024-11-27 04:38:57.122325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:00.640 [2024-11-27 04:38:57.122339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.972 ms 00:22:00.640 [2024-11-27 04:38:57.122346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.640 [2024-11-27 04:38:57.129361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.641 [2024-11-27 04:38:57.129393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:00.641 [2024-11-27 04:38:57.129405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.977 ms 00:22:00.641 [2024-11-27 04:38:57.129413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.641 [2024-11-27 04:38:57.129548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.641 [2024-11-27 04:38:57.129558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:00.641 [2024-11-27 04:38:57.129568] mngt/ftl_mngt.c: 
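
As the surrounding steps show, shutdown persists state in a fixed order -- stop the core poller, then persist L2P, NV cache metadata, the valid map, P2L, band info and trim metadata, and finally the superblock -- before 'Set FTL clean state' runs, so an interrupted shutdown leaves the device dirty and forces recovery on the next load. Here the sequence is triggered by killing the app; a sketch of driving the same teardown explicitly, assuming this test's paths and that the bdev_ftl_unload RPC (which detaches the FTL bdev) is available in this SPDK build:

  # request a graceful FTL teardown instead of relying on process exit
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
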
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:22:00.641 [2024-11-27 04:38:57.129575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.641 [2024-11-27 04:38:57.138662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.641 [2024-11-27 04:38:57.138691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:00.641 [2024-11-27 04:38:57.138702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.066 ms 00:22:00.641 [2024-11-27 04:38:57.138710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.641 [2024-11-27 04:38:57.147745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.641 [2024-11-27 04:38:57.147864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:00.641 [2024-11-27 04:38:57.147884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.986 ms 00:22:00.641 [2024-11-27 04:38:57.147891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.641 [2024-11-27 04:38:57.156849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.641 [2024-11-27 04:38:57.156877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:00.641 [2024-11-27 04:38:57.156888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.921 ms 00:22:00.641 [2024-11-27 04:38:57.156895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.641 [2024-11-27 04:38:57.165617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.641 [2024-11-27 04:38:57.165645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:00.641 [2024-11-27 04:38:57.165656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.647 ms 00:22:00.641 [2024-11-27 04:38:57.165663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.641 [2024-11-27 04:38:57.165695] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:00.641 [2024-11-27 04:38:57.165709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165812] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.165992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 
[2024-11-27 04:38:57.166017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:00.641 [2024-11-27 04:38:57.166208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:22:00.642 [2024-11-27 04:38:57.166215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:00.642 [2024-11-27 04:38:57.166557] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:00.642 [2024-11-27 04:38:57.166569] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b290c8a9-93b3-40e8-884a-c7cbc275854f 00:22:00.642 [2024-11-27 04:38:57.166579] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:00.642 [2024-11-27 04:38:57.166587] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:00.642 [2024-11-27 04:38:57.166594] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:00.642 [2024-11-27 04:38:57.166603] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:00.642 [2024-11-27 04:38:57.166610] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:00.642 [2024-11-27 04:38:57.166619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:00.642 [2024-11-27 04:38:57.166625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:00.642 [2024-11-27 04:38:57.166633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:00.642 [2024-11-27 04:38:57.166639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:00.642 [2024-11-27 04:38:57.166647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
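
The stats dump above contains everything needed to check the WAF figure: write amplification is media writes divided by user writes,

  WAF = total writes / user writes = 960 / 0 -> inf

which is why it prints as 'inf'. This run only trimmed and never wrote user data, so all 960 block writes are FTL housekeeping (the metadata persisted during startup and shutdown), and every band is still at wr_cnt 0 in the validity dump.
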
00:22:00.642 [2024-11-27 04:38:57.166654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:00.642 [2024-11-27 04:38:57.166663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:22:00.642 [2024-11-27 04:38:57.166671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.642 [2024-11-27 04:38:57.178844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.642 [2024-11-27 04:38:57.178958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:00.642 [2024-11-27 04:38:57.178977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.152 ms 00:22:00.642 [2024-11-27 04:38:57.178985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.642 [2024-11-27 04:38:57.179340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.642 [2024-11-27 04:38:57.179356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:00.642 [2024-11-27 04:38:57.179368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:22:00.642 [2024-11-27 04:38:57.179375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.909 [2024-11-27 04:38:57.222800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.909 [2024-11-27 04:38:57.222834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.909 [2024-11-27 04:38:57.222847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.909 [2024-11-27 04:38:57.222855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.909 [2024-11-27 04:38:57.222955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.909 [2024-11-27 04:38:57.222965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.909 [2024-11-27 04:38:57.222977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.909 [2024-11-27 04:38:57.222984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.909 [2024-11-27 04:38:57.223029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.909 [2024-11-27 04:38:57.223038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.909 [2024-11-27 04:38:57.223049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.909 [2024-11-27 04:38:57.223056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.909 [2024-11-27 04:38:57.223075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.909 [2024-11-27 04:38:57.223082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:00.909 [2024-11-27 04:38:57.223091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.909 [2024-11-27 04:38:57.223099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.909 [2024-11-27 04:38:57.298573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.909 [2024-11-27 04:38:57.298719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.909 [2024-11-27 04:38:57.298756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.909 [2024-11-27 04:38:57.298765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.909 [2024-11-27 
04:38:57.361300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.909 [2024-11-27 04:38:57.361344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.909 [2024-11-27 04:38:57.361359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.909 [2024-11-27 04:38:57.361367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.909 [2024-11-27 04:38:57.361448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.909 [2024-11-27 04:38:57.361457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:00.909 [2024-11-27 04:38:57.361469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.909 [2024-11-27 04:38:57.361476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.909 [2024-11-27 04:38:57.361505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.909 [2024-11-27 04:38:57.361514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:00.909 [2024-11-27 04:38:57.361523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.909 [2024-11-27 04:38:57.361530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.909 [2024-11-27 04:38:57.361621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.909 [2024-11-27 04:38:57.361631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:00.909 [2024-11-27 04:38:57.361640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.909 [2024-11-27 04:38:57.361647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.909 [2024-11-27 04:38:57.361679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.909 [2024-11-27 04:38:57.361688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:00.909 [2024-11-27 04:38:57.361697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.909 [2024-11-27 04:38:57.361704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.909 [2024-11-27 04:38:57.361765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.909 [2024-11-27 04:38:57.361774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:00.909 [2024-11-27 04:38:57.361798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.909 [2024-11-27 04:38:57.361806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.909 [2024-11-27 04:38:57.361847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.909 [2024-11-27 04:38:57.361856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:00.909 [2024-11-27 04:38:57.361866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.909 [2024-11-27 04:38:57.361873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.909 [2024-11-27 04:38:57.361999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 262.917 ms, result 0 00:22:01.481 04:38:58 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:01.481 04:38:58 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:01.746 [2024-11-27 04:38:58.099442] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:22:01.746 [2024-11-27 04:38:58.099569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76619 ] 00:22:01.746 [2024-11-27 04:38:58.259205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.009 [2024-11-27 04:38:58.358946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.271 [2024-11-27 04:38:58.617537] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.271 [2024-11-27 04:38:58.617598] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.271 [2024-11-27 04:38:58.772132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.271 [2024-11-27 04:38:58.772186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:02.271 [2024-11-27 04:38:58.772199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:02.271 [2024-11-27 04:38:58.772208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.271 [2024-11-27 04:38:58.774835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.271 [2024-11-27 04:38:58.774872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.271 [2024-11-27 04:38:58.774883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.609 ms 00:22:02.271 [2024-11-27 04:38:58.774892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.271 [2024-11-27 04:38:58.774963] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:02.271 [2024-11-27 04:38:58.775614] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:02.271 [2024-11-27 04:38:58.775638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.271 [2024-11-27 04:38:58.775646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.271 [2024-11-27 04:38:58.775655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:22:02.271 [2024-11-27 04:38:58.775662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.271 [2024-11-27 04:38:58.776893] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:02.271 [2024-11-27 04:38:58.789026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.272 [2024-11-27 04:38:58.789059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:02.272 [2024-11-27 04:38:58.789070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.134 ms 00:22:02.272 [2024-11-27 04:38:58.789079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.272 [2024-11-27 04:38:58.789166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.272 [2024-11-27 04:38:58.789177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:02.272 [2024-11-27 04:38:58.789186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.020 ms 00:22:02.272 [2024-11-27 04:38:58.789193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.272 [2024-11-27 04:38:58.793960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.272 [2024-11-27 04:38:58.793991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.272 [2024-11-27 04:38:58.794000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.715 ms 00:22:02.272 [2024-11-27 04:38:58.794007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.272 [2024-11-27 04:38:58.794091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.272 [2024-11-27 04:38:58.794100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.272 [2024-11-27 04:38:58.794108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:02.272 [2024-11-27 04:38:58.794116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.272 [2024-11-27 04:38:58.794142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.272 [2024-11-27 04:38:58.794150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:02.272 [2024-11-27 04:38:58.794158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:02.272 [2024-11-27 04:38:58.794165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.272 [2024-11-27 04:38:58.794185] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:02.272 [2024-11-27 04:38:58.797435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.272 [2024-11-27 04:38:58.797463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.272 [2024-11-27 04:38:58.797472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.255 ms 00:22:02.272 [2024-11-27 04:38:58.797479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.272 [2024-11-27 04:38:58.797516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.272 [2024-11-27 04:38:58.797525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:02.272 [2024-11-27 04:38:58.797532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:02.272 [2024-11-27 04:38:58.797539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.272 [2024-11-27 04:38:58.797559] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:02.272 [2024-11-27 04:38:58.797575] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:02.272 [2024-11-27 04:38:58.797608] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:02.272 [2024-11-27 04:38:58.797623] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:02.272 [2024-11-27 04:38:58.797739] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:02.272 [2024-11-27 04:38:58.797750] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:02.272 [2024-11-27 04:38:58.797760] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:02.272 [2024-11-27 04:38:58.797773] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:02.272 [2024-11-27 04:38:58.797782] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:02.272 [2024-11-27 04:38:58.797790] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:02.272 [2024-11-27 04:38:58.797797] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:02.272 [2024-11-27 04:38:58.797804] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:02.272 [2024-11-27 04:38:58.797811] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:02.272 [2024-11-27 04:38:58.797819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.272 [2024-11-27 04:38:58.797826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:02.272 [2024-11-27 04:38:58.797834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:22:02.272 [2024-11-27 04:38:58.797841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.272 [2024-11-27 04:38:58.797928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.272 [2024-11-27 04:38:58.797938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:02.272 [2024-11-27 04:38:58.797945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:02.272 [2024-11-27 04:38:58.797952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.272 [2024-11-27 04:38:58.798051] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:02.272 [2024-11-27 04:38:58.798060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:02.272 [2024-11-27 04:38:58.798067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.272 [2024-11-27 04:38:58.798075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.272 [2024-11-27 04:38:58.798082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:02.272 [2024-11-27 04:38:58.798089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:02.272 [2024-11-27 04:38:58.798096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:02.272 [2024-11-27 04:38:58.798103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:02.272 [2024-11-27 04:38:58.798109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:02.272 [2024-11-27 04:38:58.798116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.272 [2024-11-27 04:38:58.798123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:02.272 [2024-11-27 04:38:58.798136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:02.272 [2024-11-27 04:38:58.798142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.272 [2024-11-27 04:38:58.798149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:02.272 [2024-11-27 04:38:58.798156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:02.272 [2024-11-27 04:38:58.798163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.272 [2024-11-27 04:38:58.798169] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:02.272 [2024-11-27 04:38:58.798176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:02.272 [2024-11-27 04:38:58.798182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.272 [2024-11-27 04:38:58.798189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:02.272 [2024-11-27 04:38:58.798195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:02.272 [2024-11-27 04:38:58.798201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.272 [2024-11-27 04:38:58.798207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:02.272 [2024-11-27 04:38:58.798214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:02.272 [2024-11-27 04:38:58.798220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.272 [2024-11-27 04:38:58.798227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:02.272 [2024-11-27 04:38:58.798233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:02.272 [2024-11-27 04:38:58.798239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.272 [2024-11-27 04:38:58.798245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:02.272 [2024-11-27 04:38:58.798252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:02.272 [2024-11-27 04:38:58.798258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.272 [2024-11-27 04:38:58.798264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:02.272 [2024-11-27 04:38:58.798270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:02.272 [2024-11-27 04:38:58.798276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.272 [2024-11-27 04:38:58.798283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:02.272 [2024-11-27 04:38:58.798290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:02.272 [2024-11-27 04:38:58.798296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.272 [2024-11-27 04:38:58.798303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:02.272 [2024-11-27 04:38:58.798309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:02.272 [2024-11-27 04:38:58.798316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.272 [2024-11-27 04:38:58.798322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:02.272 [2024-11-27 04:38:58.798329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:02.272 [2024-11-27 04:38:58.798335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.272 [2024-11-27 04:38:58.798342] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:02.272 [2024-11-27 04:38:58.798349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:02.272 [2024-11-27 04:38:58.798358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.272 [2024-11-27 04:38:58.798365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.272 [2024-11-27 04:38:58.798373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:02.272 
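
These region sizes are self-consistent with the geometry printed above: 23592960 L2P entries at an address size of 4 bytes need 23592960 x 4 B = 94371840 B = 90.00 MiB, exactly the l2p region size, and the superblock layout dump below lists the same region as blk_sz:0x5a00 = 23040 blocks, giving 94371840 / 23040 = 4096 B, i.e. a 4 KiB FTL block. Of the base device's 103424.00 MiB, 102400.00 MiB (100 GiB) is user data (data_btm); the remaining 1024 MiB holds the superblock mirror, the valid map, and reserved space.
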
[2024-11-27 04:38:58.798379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:02.272 [2024-11-27 04:38:58.798386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:02.272 [2024-11-27 04:38:58.798392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:02.272 [2024-11-27 04:38:58.798399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:02.272 [2024-11-27 04:38:58.798405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:02.272 [2024-11-27 04:38:58.798413] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:02.272 [2024-11-27 04:38:58.798421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.273 [2024-11-27 04:38:58.798430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:02.273 [2024-11-27 04:38:58.798437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:02.273 [2024-11-27 04:38:58.798444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:02.273 [2024-11-27 04:38:58.798451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:02.273 [2024-11-27 04:38:58.798458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:02.273 [2024-11-27 04:38:58.798465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:02.273 [2024-11-27 04:38:58.798471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:02.273 [2024-11-27 04:38:58.798479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:02.273 [2024-11-27 04:38:58.798485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:02.273 [2024-11-27 04:38:58.798493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:02.273 [2024-11-27 04:38:58.798500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:02.273 [2024-11-27 04:38:58.798508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:02.273 [2024-11-27 04:38:58.798515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:02.273 [2024-11-27 04:38:58.798522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:02.273 [2024-11-27 04:38:58.798529] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:02.273 [2024-11-27 04:38:58.798537] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.273 [2024-11-27 04:38:58.798545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:02.273 [2024-11-27 04:38:58.798552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:02.273 [2024-11-27 04:38:58.798559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:02.273 [2024-11-27 04:38:58.798566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:02.273 [2024-11-27 04:38:58.798573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.273 [2024-11-27 04:38:58.798582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:02.273 [2024-11-27 04:38:58.798591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:22:02.273 [2024-11-27 04:38:58.798598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.273 [2024-11-27 04:38:58.824218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.273 [2024-11-27 04:38:58.824371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:02.273 [2024-11-27 04:38:58.824387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.554 ms 00:22:02.273 [2024-11-27 04:38:58.824395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.273 [2024-11-27 04:38:58.824525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.273 [2024-11-27 04:38:58.824535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:02.273 [2024-11-27 04:38:58.824543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:02.273 [2024-11-27 04:38:58.824550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:58.871205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:58.871249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:02.535 [2024-11-27 04:38:58.871265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.635 ms 00:22:02.535 [2024-11-27 04:38:58.871273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:58.871376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:58.871388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:02.535 [2024-11-27 04:38:58.871397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:02.535 [2024-11-27 04:38:58.871405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:58.871710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:58.871744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:02.535 [2024-11-27 04:38:58.871760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:22:02.535 [2024-11-27 04:38:58.871768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 
04:38:58.871894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:58.871907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:02.535 [2024-11-27 04:38:58.871916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:22:02.535 [2024-11-27 04:38:58.871923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:58.885039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:58.885182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:02.535 [2024-11-27 04:38:58.885199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.096 ms 00:22:02.535 [2024-11-27 04:38:58.885208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:58.897109] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:02.535 [2024-11-27 04:38:58.897141] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:02.535 [2024-11-27 04:38:58.897152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:58.897160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:02.535 [2024-11-27 04:38:58.897168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.843 ms 00:22:02.535 [2024-11-27 04:38:58.897176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:58.921189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:58.921224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:02.535 [2024-11-27 04:38:58.921235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.941 ms 00:22:02.535 [2024-11-27 04:38:58.921244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:58.932745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:58.932775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:02.535 [2024-11-27 04:38:58.932784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.432 ms 00:22:02.535 [2024-11-27 04:38:58.932791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:58.943802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:58.943934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:02.535 [2024-11-27 04:38:58.943949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.940 ms 00:22:02.535 [2024-11-27 04:38:58.943956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:58.944568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:58.944588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:02.535 [2024-11-27 04:38:58.944597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:22:02.535 [2024-11-27 04:38:58.944604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:58.999030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:58.999211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:02.535 [2024-11-27 04:38:58.999229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.403 ms 00:22:02.535 [2024-11-27 04:38:58.999237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:59.009758] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:02.535 [2024-11-27 04:38:59.023490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:59.023529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:02.535 [2024-11-27 04:38:59.023542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.150 ms 00:22:02.535 [2024-11-27 04:38:59.023555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:59.023648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:59.023659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:02.535 [2024-11-27 04:38:59.023668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:02.535 [2024-11-27 04:38:59.023675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:59.023720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:59.023755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:02.535 [2024-11-27 04:38:59.023763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:02.535 [2024-11-27 04:38:59.023774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:59.023804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:59.023812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:02.535 [2024-11-27 04:38:59.023820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:02.535 [2024-11-27 04:38:59.023826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:59.023855] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:02.535 [2024-11-27 04:38:59.023864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:59.023871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:02.535 [2024-11-27 04:38:59.023879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:02.535 [2024-11-27 04:38:59.023886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:59.046520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:59.046569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:02.535 [2024-11-27 04:38:59.046581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.611 ms 00:22:02.535 [2024-11-27 04:38:59.046590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.535 [2024-11-27 04:38:59.046679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.535 [2024-11-27 04:38:59.046690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:22:02.536 [2024-11-27 04:38:59.046699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:02.536 [2024-11-27 04:38:59.046706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.536 [2024-11-27 04:38:59.047881] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.536 [2024-11-27 04:38:59.050866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 275.460 ms, result 0 00:22:02.536 [2024-11-27 04:38:59.051420] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:02.536 [2024-11-27 04:38:59.064200] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:03.483  [2024-11-27T04:39:01.481Z] Copying: 45/256 [MB] (45 MBps) [2024-11-27T04:39:02.421Z] Copying: 90/256 [MB] (45 MBps) [2024-11-27T04:39:03.357Z] Copying: 134/256 [MB] (44 MBps) [2024-11-27T04:39:04.293Z] Copying: 177/256 [MB] (42 MBps) [2024-11-27T04:39:05.228Z] Copying: 221/256 [MB] (44 MBps) [2024-11-27T04:39:05.228Z] Copying: 256/256 [MB] (average 43 MBps)[2024-11-27 04:39:04.909825] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:08.641 [2024-11-27 04:39:04.919045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.641 [2024-11-27 04:39:04.919082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:08.641 [2024-11-27 04:39:04.919101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:08.641 [2024-11-27 04:39:04.919109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.641 [2024-11-27 04:39:04.919130] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:08.641 [2024-11-27 04:39:04.921710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.641 [2024-11-27 04:39:04.921749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:08.641 [2024-11-27 04:39:04.921760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.567 ms 00:22:08.641 [2024-11-27 04:39:04.921767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.641 [2024-11-27 04:39:04.922014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.641 [2024-11-27 04:39:04.922028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:08.641 [2024-11-27 04:39:04.922035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:22:08.641 [2024-11-27 04:39:04.922043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.641 [2024-11-27 04:39:04.925804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.641 [2024-11-27 04:39:04.925825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:08.641 [2024-11-27 04:39:04.925835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.744 ms 00:22:08.641 [2024-11-27 04:39:04.925843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.641 [2024-11-27 04:39:04.932719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.641 [2024-11-27 04:39:04.932748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
00:22:08.641 [2024-11-27 04:39:04.932757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.860 ms 00:22:08.641 [2024-11-27 04:39:04.932765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.641 [2024-11-27 04:39:04.955457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.641 [2024-11-27 04:39:04.955486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:08.641 [2024-11-27 04:39:04.955497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.639 ms 00:22:08.641 [2024-11-27 04:39:04.955504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.641 [2024-11-27 04:39:04.969283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.641 [2024-11-27 04:39:04.969316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:08.641 [2024-11-27 04:39:04.969333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.758 ms 00:22:08.641 [2024-11-27 04:39:04.969342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.641 [2024-11-27 04:39:04.969475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.641 [2024-11-27 04:39:04.969485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:08.641 [2024-11-27 04:39:04.969500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:08.641 [2024-11-27 04:39:04.969507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.641 [2024-11-27 04:39:04.992457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.641 [2024-11-27 04:39:04.992491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:08.641 [2024-11-27 04:39:04.992502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.934 ms 00:22:08.641 [2024-11-27 04:39:04.992510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.641 [2024-11-27 04:39:05.014572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.641 [2024-11-27 04:39:05.014601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:08.641 [2024-11-27 04:39:05.014611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.041 ms 00:22:08.641 [2024-11-27 04:39:05.014618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.641 [2024-11-27 04:39:05.036422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.641 [2024-11-27 04:39:05.036452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:08.641 [2024-11-27 04:39:05.036461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.784 ms 00:22:08.641 [2024-11-27 04:39:05.036469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.641 [2024-11-27 04:39:05.058324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.641 [2024-11-27 04:39:05.058463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:08.641 [2024-11-27 04:39:05.058478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.809 ms 00:22:08.641 [2024-11-27 04:39:05.058485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.641 [2024-11-27 04:39:05.058507] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:08.641 [2024-11-27 
04:39:05.058521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 99: 0 / 261120 wr_cnt: 0 state: free, identical for every band (individual entries logged 04:39:05.058521 through 04:39:05.059268) 00:22:08.642 [2024-11-27 04:39:05.059276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:08.642 [2024-11-27 04:39:05.059291] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:08.642 [2024-11-27 04:39:05.059298] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b290c8a9-93b3-40e8-884a-c7cbc275854f 00:22:08.642 [2024-11-27 04:39:05.059306] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:08.642 [2024-11-27 04:39:05.059313] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:08.642 [2024-11-27 04:39:05.059320] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:08.642 [2024-11-27 04:39:05.059328] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:08.642 [2024-11-27 04:39:05.059335] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:08.642 [2024-11-27 04:39:05.059342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:08.642 [2024-11-27 04:39:05.059351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:08.642 [2024-11-27 04:39:05.059358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:08.642 [2024-11-27 04:39:05.059364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:08.642 [2024-11-27 04:39:05.059371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.642 [2024-11-27 04:39:05.059378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:08.642 [2024-11-27 04:39:05.059387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:22:08.642 [2024-11-27 04:39:05.059394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.642 [2024-11-27 04:39:05.071627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.642 [2024-11-27 04:39:05.071655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:08.642 [2024-11-27 04:39:05.071665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.216 ms 00:22:08.642 [2024-11-27 04:39:05.071673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.642 [2024-11-27 04:39:05.072047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.642 [2024-11-27 04:39:05.072061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:08.642 [2024-11-27 04:39:05.072070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:22:08.643 [2024-11-27 04:39:05.072077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.643 [2024-11-27 04:39:05.106465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.643 [2024-11-27 04:39:05.106502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:08.643 [2024-11-27 04:39:05.106512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.643 [2024-11-27 04:39:05.106523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.643 [2024-11-27 04:39:05.106612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.643 [2024-11-27 04:39:05.106621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:08.643 [2024-11-27 04:39:05.106629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.643 [2024-11-27 04:39:05.106636] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:08.643 [2024-11-27 04:39:05.106675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.643 [2024-11-27 04:39:05.106684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:08.643 [2024-11-27 04:39:05.106691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.643 [2024-11-27 04:39:05.106698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.643 [2024-11-27 04:39:05.106717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.643 [2024-11-27 04:39:05.106742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:08.643 [2024-11-27 04:39:05.106750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.643 [2024-11-27 04:39:05.106757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.643 [2024-11-27 04:39:05.183957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.643 [2024-11-27 04:39:05.184004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:08.643 [2024-11-27 04:39:05.184015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.643 [2024-11-27 04:39:05.184023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.900 [2024-11-27 04:39:05.247165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.901 [2024-11-27 04:39:05.247211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:08.901 [2024-11-27 04:39:05.247222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.901 [2024-11-27 04:39:05.247230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.901 [2024-11-27 04:39:05.247298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.901 [2024-11-27 04:39:05.247307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:08.901 [2024-11-27 04:39:05.247314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.901 [2024-11-27 04:39:05.247322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.901 [2024-11-27 04:39:05.247349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.901 [2024-11-27 04:39:05.247359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:08.901 [2024-11-27 04:39:05.247367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.901 [2024-11-27 04:39:05.247374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.901 [2024-11-27 04:39:05.247456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.901 [2024-11-27 04:39:05.247466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:08.901 [2024-11-27 04:39:05.247474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.901 [2024-11-27 04:39:05.247481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.901 [2024-11-27 04:39:05.247509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.901 [2024-11-27 04:39:05.247517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:08.901 [2024-11-27 04:39:05.247527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:08.901 [2024-11-27 04:39:05.247535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.901 [2024-11-27 04:39:05.247569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.901 [2024-11-27 04:39:05.247577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:08.901 [2024-11-27 04:39:05.247585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.901 [2024-11-27 04:39:05.247592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.901 [2024-11-27 04:39:05.247631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.901 [2024-11-27 04:39:05.247643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:08.901 [2024-11-27 04:39:05.247650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.901 [2024-11-27 04:39:05.247657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.901 [2024-11-27 04:39:05.247810] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 328.754 ms, result 0 00:22:09.488 00:22:09.488 00:22:09.488 04:39:05 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:22:09.488 04:39:05 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:10.055 04:39:06 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:10.055 [2024-11-27 04:39:06.537210] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
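The trim checks just above (ftl/trim.sh@86-87) look at the dd output file two ways: cmp --bytes=4194304 against /dev/zero tests whether the first 4 MiB read back as zeroes, and md5sum fingerprints the whole file before the next spdk_dd pass overwrites ftl0 with random_pattern. Below is a minimal Python sketch of the same two checks, assuming only that the data path printed in the log exists locally; which cmp outcome trim.sh expects is left to the script itself.

    import hashlib

    # Path exactly as printed in the log; adjust for a local checkout.
    data = "/home/vagrant/spdk_repo/spdk/test/ftl/data"
    n = 4 * 1024 * 1024  # cmp --bytes=4194304, i.e. the first 4 MiB

    with open(data, "rb") as f:
        head = f.read(n)
    # Equivalent of `cmp --bytes=4194304 <data> /dev/zero`: true only
    # if every byte in the first 4 MiB is zero.
    print("first 4 MiB zeroed:", head == bytes(n))

    # Equivalent of `md5sum <data>`, streamed in 1 MiB chunks.
    h = hashlib.md5()
    with open(data, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    print(h.hexdigest(), data)
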
00:22:10.055 [2024-11-27 04:39:06.537325] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76717 ] 00:22:10.313 [2024-11-27 04:39:06.696982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.313 [2024-11-27 04:39:06.800296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.571 [2024-11-27 04:39:07.060239] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:10.571 [2024-11-27 04:39:07.060305] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:10.831 [2024-11-27 04:39:07.214270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.831 [2024-11-27 04:39:07.214322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:10.831 [2024-11-27 04:39:07.214335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:10.831 [2024-11-27 04:39:07.214343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.831 [2024-11-27 04:39:07.216981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.831 [2024-11-27 04:39:07.217131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:10.831 [2024-11-27 04:39:07.217148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.620 ms 00:22:10.831 [2024-11-27 04:39:07.217155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.831 [2024-11-27 04:39:07.217222] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:10.831 [2024-11-27 04:39:07.217935] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:10.831 [2024-11-27 04:39:07.217956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.831 [2024-11-27 04:39:07.217964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:10.831 [2024-11-27 04:39:07.217973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:22:10.831 [2024-11-27 04:39:07.217980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.831 [2024-11-27 04:39:07.219237] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:10.831 [2024-11-27 04:39:07.231404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.831 [2024-11-27 04:39:07.231436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:10.831 [2024-11-27 04:39:07.231448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.169 ms 00:22:10.831 [2024-11-27 04:39:07.231456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.831 [2024-11-27 04:39:07.231540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.831 [2024-11-27 04:39:07.231552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:10.831 [2024-11-27 04:39:07.231560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:10.831 [2024-11-27 04:39:07.231567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.831 [2024-11-27 04:39:07.236399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:10.831 [2024-11-27 04:39:07.236428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:10.831 [2024-11-27 04:39:07.236437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.792 ms 00:22:10.831 [2024-11-27 04:39:07.236444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.831 [2024-11-27 04:39:07.236528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.831 [2024-11-27 04:39:07.236538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:10.831 [2024-11-27 04:39:07.236545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:10.831 [2024-11-27 04:39:07.236553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.831 [2024-11-27 04:39:07.236579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.831 [2024-11-27 04:39:07.236586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:10.831 [2024-11-27 04:39:07.236594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:10.831 [2024-11-27 04:39:07.236601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.831 [2024-11-27 04:39:07.236621] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:10.831 [2024-11-27 04:39:07.239976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.831 [2024-11-27 04:39:07.240001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:10.831 [2024-11-27 04:39:07.240010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.360 ms 00:22:10.831 [2024-11-27 04:39:07.240017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.831 [2024-11-27 04:39:07.240050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.831 [2024-11-27 04:39:07.240058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:10.831 [2024-11-27 04:39:07.240066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:10.831 [2024-11-27 04:39:07.240073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.831 [2024-11-27 04:39:07.240094] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:10.831 [2024-11-27 04:39:07.240111] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:10.831 [2024-11-27 04:39:07.240145] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:10.831 [2024-11-27 04:39:07.240160] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:10.831 [2024-11-27 04:39:07.240260] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:10.831 [2024-11-27 04:39:07.240270] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:10.831 [2024-11-27 04:39:07.240280] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:10.831 [2024-11-27 04:39:07.240292] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:10.831 [2024-11-27 04:39:07.240301] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:10.831 [2024-11-27 04:39:07.240309] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:10.831 [2024-11-27 04:39:07.240316] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:10.831 [2024-11-27 04:39:07.240323] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:10.831 [2024-11-27 04:39:07.240330] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:10.831 [2024-11-27 04:39:07.240338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.831 [2024-11-27 04:39:07.240345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:10.831 [2024-11-27 04:39:07.240352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:22:10.831 [2024-11-27 04:39:07.240360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.831 [2024-11-27 04:39:07.240445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.831 [2024-11-27 04:39:07.240456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:10.831 [2024-11-27 04:39:07.240464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:10.831 [2024-11-27 04:39:07.240470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.831 [2024-11-27 04:39:07.240585] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:10.831 [2024-11-27 04:39:07.240596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:10.831 [2024-11-27 04:39:07.240604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:10.831 [2024-11-27 04:39:07.240611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.831 [2024-11-27 04:39:07.240620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:10.831 [2024-11-27 04:39:07.240627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:10.831 [2024-11-27 04:39:07.240634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:10.831 [2024-11-27 04:39:07.240641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:10.831 [2024-11-27 04:39:07.240649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:10.831 [2024-11-27 04:39:07.240655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:10.832 [2024-11-27 04:39:07.240663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:10.832 [2024-11-27 04:39:07.240676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:10.832 [2024-11-27 04:39:07.240683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:10.832 [2024-11-27 04:39:07.240689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:10.832 [2024-11-27 04:39:07.240696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:10.832 [2024-11-27 04:39:07.240702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.832 [2024-11-27 04:39:07.240709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:10.832 [2024-11-27 04:39:07.240716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:10.832 [2024-11-27 04:39:07.240739] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.832 [2024-11-27 04:39:07.240748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:10.832 [2024-11-27 04:39:07.240755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:10.832 [2024-11-27 04:39:07.240762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.832 [2024-11-27 04:39:07.240769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:10.832 [2024-11-27 04:39:07.240776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:10.832 [2024-11-27 04:39:07.240783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.832 [2024-11-27 04:39:07.240790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:10.832 [2024-11-27 04:39:07.240797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:10.832 [2024-11-27 04:39:07.240803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.832 [2024-11-27 04:39:07.240810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:10.832 [2024-11-27 04:39:07.240817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:10.832 [2024-11-27 04:39:07.240823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.832 [2024-11-27 04:39:07.240830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:10.832 [2024-11-27 04:39:07.240844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:10.832 [2024-11-27 04:39:07.240850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:10.832 [2024-11-27 04:39:07.240857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:10.832 [2024-11-27 04:39:07.240863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:10.832 [2024-11-27 04:39:07.240870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:10.832 [2024-11-27 04:39:07.240876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:10.832 [2024-11-27 04:39:07.240883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:10.832 [2024-11-27 04:39:07.240890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.832 [2024-11-27 04:39:07.240896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:10.832 [2024-11-27 04:39:07.240903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:10.832 [2024-11-27 04:39:07.240910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.832 [2024-11-27 04:39:07.240917] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:10.832 [2024-11-27 04:39:07.240924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:10.832 [2024-11-27 04:39:07.240934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:10.832 [2024-11-27 04:39:07.240941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.832 [2024-11-27 04:39:07.240949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:10.832 [2024-11-27 04:39:07.240957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:10.832 [2024-11-27 04:39:07.240963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:10.832 
[2024-11-27 04:39:07.240970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:10.832 [2024-11-27 04:39:07.240977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:10.832 [2024-11-27 04:39:07.240984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:10.832 [2024-11-27 04:39:07.240992] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:10.832 [2024-11-27 04:39:07.241001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:10.832 [2024-11-27 04:39:07.241009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:10.832 [2024-11-27 04:39:07.241016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:10.832 [2024-11-27 04:39:07.241023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:10.832 [2024-11-27 04:39:07.241030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:10.832 [2024-11-27 04:39:07.241037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:10.832 [2024-11-27 04:39:07.241044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:10.832 [2024-11-27 04:39:07.241051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:10.832 [2024-11-27 04:39:07.241058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:10.832 [2024-11-27 04:39:07.241065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:10.832 [2024-11-27 04:39:07.241072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:10.832 [2024-11-27 04:39:07.241079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:10.832 [2024-11-27 04:39:07.241086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:10.832 [2024-11-27 04:39:07.241093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:10.832 [2024-11-27 04:39:07.241100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:10.832 [2024-11-27 04:39:07.241107] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:10.832 [2024-11-27 04:39:07.241115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:10.832 [2024-11-27 04:39:07.241124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:10.832 [2024-11-27 04:39:07.241131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:10.832 [2024-11-27 04:39:07.241138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:10.832 [2024-11-27 04:39:07.241146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:10.832 [2024-11-27 04:39:07.241153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.832 [2024-11-27 04:39:07.241163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:10.832 [2024-11-27 04:39:07.241170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:22:10.832 [2024-11-27 04:39:07.241177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.832 [2024-11-27 04:39:07.266861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.832 [2024-11-27 04:39:07.267015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:10.832 [2024-11-27 04:39:07.267032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.632 ms 00:22:10.832 [2024-11-27 04:39:07.267041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.832 [2024-11-27 04:39:07.267171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.832 [2024-11-27 04:39:07.267181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:10.832 [2024-11-27 04:39:07.267190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:10.832 [2024-11-27 04:39:07.267197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.832 [2024-11-27 04:39:07.313198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.832 [2024-11-27 04:39:07.313239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:10.832 [2024-11-27 04:39:07.313253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.980 ms 00:22:10.832 [2024-11-27 04:39:07.313262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.832 [2024-11-27 04:39:07.313357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.832 [2024-11-27 04:39:07.313369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:10.832 [2024-11-27 04:39:07.313377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:10.832 [2024-11-27 04:39:07.313385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.832 [2024-11-27 04:39:07.313690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.832 [2024-11-27 04:39:07.313704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:10.832 [2024-11-27 04:39:07.313719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:22:10.832 [2024-11-27 04:39:07.313746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.832 [2024-11-27 04:39:07.313873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.832 [2024-11-27 04:39:07.313882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:10.832 [2024-11-27 04:39:07.313890] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:22:10.832 [2024-11-27 04:39:07.313898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.832 [2024-11-27 04:39:07.327116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.832 [2024-11-27 04:39:07.327252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:10.832 [2024-11-27 04:39:07.327268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.198 ms 00:22:10.833 [2024-11-27 04:39:07.327275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.833 [2024-11-27 04:39:07.339569] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:10.833 [2024-11-27 04:39:07.339601] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:10.833 [2024-11-27 04:39:07.339613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.833 [2024-11-27 04:39:07.339621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:10.833 [2024-11-27 04:39:07.339630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.238 ms 00:22:10.833 [2024-11-27 04:39:07.339636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.833 [2024-11-27 04:39:07.363597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.833 [2024-11-27 04:39:07.363644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:10.833 [2024-11-27 04:39:07.363656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.892 ms 00:22:10.833 [2024-11-27 04:39:07.363665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.833 [2024-11-27 04:39:07.374937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.833 [2024-11-27 04:39:07.374969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:10.833 [2024-11-27 04:39:07.374979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.214 ms 00:22:10.833 [2024-11-27 04:39:07.374987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.833 [2024-11-27 04:39:07.385992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.833 [2024-11-27 04:39:07.386121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:10.833 [2024-11-27 04:39:07.386136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.944 ms 00:22:10.833 [2024-11-27 04:39:07.386143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.833 [2024-11-27 04:39:07.386766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.833 [2024-11-27 04:39:07.386786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:10.833 [2024-11-27 04:39:07.386795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:22:10.833 [2024-11-27 04:39:07.386803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.091 [2024-11-27 04:39:07.440450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.091 [2024-11-27 04:39:07.440493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:11.091 [2024-11-27 04:39:07.440506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.623 ms 00:22:11.091 [2024-11-27 04:39:07.440514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.091 [2024-11-27 04:39:07.450635] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:11.091 [2024-11-27 04:39:07.464590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.091 [2024-11-27 04:39:07.464633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:11.091 [2024-11-27 04:39:07.464646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.980 ms 00:22:11.091 [2024-11-27 04:39:07.464659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.091 [2024-11-27 04:39:07.464773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.091 [2024-11-27 04:39:07.464786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:11.091 [2024-11-27 04:39:07.464795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:11.091 [2024-11-27 04:39:07.464802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.091 [2024-11-27 04:39:07.464856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.091 [2024-11-27 04:39:07.464865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:11.091 [2024-11-27 04:39:07.464874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:11.091 [2024-11-27 04:39:07.464884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.091 [2024-11-27 04:39:07.464911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.091 [2024-11-27 04:39:07.464919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:11.091 [2024-11-27 04:39:07.464927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:11.091 [2024-11-27 04:39:07.464934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.091 [2024-11-27 04:39:07.464967] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:11.091 [2024-11-27 04:39:07.464976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.091 [2024-11-27 04:39:07.464983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:11.091 [2024-11-27 04:39:07.464991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:11.091 [2024-11-27 04:39:07.464998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.091 [2024-11-27 04:39:07.488115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.091 [2024-11-27 04:39:07.488153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:11.091 [2024-11-27 04:39:07.488166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.098 ms 00:22:11.091 [2024-11-27 04:39:07.488174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.091 [2024-11-27 04:39:07.488264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.091 [2024-11-27 04:39:07.488274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:11.091 [2024-11-27 04:39:07.488283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:11.091 [2024-11-27 04:39:07.488291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
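Every management step in this trace is the same quadruple of NOTICE records from mngt/ftl_mngt.c (Action or Rollback, then name, duration, status), and finish_msg closes each process with a total, e.g. 'FTL startup', duration = 274.483 ms just below. The following small sketch folds the quadruples back into (name, duration) rows and totals them; it assumes a saved copy of this console output with one record per line, as the raw autotest log has, and its regexes match the trace_step text shown here, not any SPDK API.

    import re

    steps, name = [], None
    with open("build.log") as log:  # hypothetical saved copy of this output
        for line in log:
            m = re.search(r"\[FTL\]\[ftl0\] name: (.+)", line)
            if m:
                name = m.group(1).strip()
                continue
            m = re.search(r"\[FTL\]\[ftl0\] duration: ([\d.]+) ms", line)
            if m and name is not None:
                steps.append((name, float(m.group(1))))
                name = None

    # Slowest steps first, then the sum over all traced steps.
    for step, ms in sorted(steps, key=lambda s: -s[1])[:5]:
        print(f"{ms:9.3f} ms  {step}")
    print(f"{sum(ms for _, ms in steps):9.3f} ms  total over {len(steps)} steps")

For a single management process the per-step sum should land close to the finish_msg total (274.483 ms for this startup), the remainder being time spent between traced steps; the code above totals every traced step in the file. The layout dump earlier is also internally consistent on sizes: region l2p is blk_sz 0x5a00 = 23040 blocks shown as 90.00 MiB, which pins the FTL block at 4 KiB and matches the 23592960 L2P entries times the 4-byte address size (94,371,840 bytes = 90 MiB).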
00:22:11.091 [2024-11-27 04:39:07.489068] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:11.091 [2024-11-27 04:39:07.491931] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.483 ms, result 0 00:22:11.091 [2024-11-27 04:39:07.492883] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:11.091 [2024-11-27 04:39:07.505714] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:11.091  [2024-11-27T04:39:07.678Z] Copying: 4096/4096 [kB] (average 40 MBps)[2024-11-27 04:39:07.606111] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:11.091 [2024-11-27 04:39:07.615027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.091 [2024-11-27 04:39:07.615058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:11.091 [2024-11-27 04:39:07.615075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:11.091 [2024-11-27 04:39:07.615083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.091 [2024-11-27 04:39:07.615104] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:11.091 [2024-11-27 04:39:07.617671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.091 [2024-11-27 04:39:07.617694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:11.092 [2024-11-27 04:39:07.617705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.555 ms 00:22:11.092 [2024-11-27 04:39:07.617714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.092 [2024-11-27 04:39:07.618980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.092 [2024-11-27 04:39:07.619007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:11.092 [2024-11-27 04:39:07.619017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:22:11.092 [2024-11-27 04:39:07.619024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.092 [2024-11-27 04:39:07.622821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.092 [2024-11-27 04:39:07.622844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:11.092 [2024-11-27 04:39:07.622853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.777 ms 00:22:11.092 [2024-11-27 04:39:07.622861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.092 [2024-11-27 04:39:07.630010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.092 [2024-11-27 04:39:07.630033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:11.092 [2024-11-27 04:39:07.630042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.125 ms 00:22:11.092 [2024-11-27 04:39:07.630051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.092 [2024-11-27 04:39:07.653029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.092 [2024-11-27 04:39:07.653057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:11.092 [2024-11-27 04:39:07.653068] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 22.925 ms 00:22:11.092 [2024-11-27 04:39:07.653075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.092 [2024-11-27 04:39:07.666791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.092 [2024-11-27 04:39:07.666823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:11.092 [2024-11-27 04:39:07.666834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.682 ms 00:22:11.092 [2024-11-27 04:39:07.666843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.092 [2024-11-27 04:39:07.666962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.092 [2024-11-27 04:39:07.666971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:11.092 [2024-11-27 04:39:07.666987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:11.092 [2024-11-27 04:39:07.666995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.350 [2024-11-27 04:39:07.690260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.350 [2024-11-27 04:39:07.690290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:11.350 [2024-11-27 04:39:07.690299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.248 ms 00:22:11.350 [2024-11-27 04:39:07.690307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.350 [2024-11-27 04:39:07.713091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.350 [2024-11-27 04:39:07.713121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:11.350 [2024-11-27 04:39:07.713132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.749 ms 00:22:11.350 [2024-11-27 04:39:07.713140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.350 [2024-11-27 04:39:07.735069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.350 [2024-11-27 04:39:07.735097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:11.350 [2024-11-27 04:39:07.735108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.895 ms 00:22:11.350 [2024-11-27 04:39:07.735117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.350 [2024-11-27 04:39:07.757216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.350 [2024-11-27 04:39:07.757242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:11.350 [2024-11-27 04:39:07.757251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.042 ms 00:22:11.350 [2024-11-27 04:39:07.757258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.350 [2024-11-27 04:39:07.757290] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:11.350 [2024-11-27 04:39:07.757304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:22:11.350 [2024-11-27 04:39:07.757339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:11.350 [2024-11-27 04:39:07.757603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757909] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.757999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.758006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.758014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.758028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.758036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.758043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.758050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.758058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.758065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.758073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:11.351 [2024-11-27 04:39:07.758088] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:11.351 [2024-11-27 04:39:07.758096] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b290c8a9-93b3-40e8-884a-c7cbc275854f 00:22:11.351 [2024-11-27 04:39:07.758104] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:11.351 [2024-11-27 04:39:07.758111] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:22:11.351 [2024-11-27 04:39:07.758118] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:11.351 [2024-11-27 04:39:07.758126] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:11.351 [2024-11-27 04:39:07.758133] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:11.351 [2024-11-27 04:39:07.758140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:11.351 [2024-11-27 04:39:07.758149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:11.351 [2024-11-27 04:39:07.758155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:11.351 [2024-11-27 04:39:07.758162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:11.351 [2024-11-27 04:39:07.758169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.351 [2024-11-27 04:39:07.758176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:11.351 [2024-11-27 04:39:07.758184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:22:11.351 [2024-11-27 04:39:07.758191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.351 [2024-11-27 04:39:07.770413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.351 [2024-11-27 04:39:07.770438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:11.351 [2024-11-27 04:39:07.770448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.205 ms 00:22:11.351 [2024-11-27 04:39:07.770456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.351 [2024-11-27 04:39:07.770824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.351 [2024-11-27 04:39:07.770838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:11.351 [2024-11-27 04:39:07.770847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:22:11.351 [2024-11-27 04:39:07.770854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.351 [2024-11-27 04:39:07.805560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.351 [2024-11-27 04:39:07.805595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:11.351 [2024-11-27 04:39:07.805605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.351 [2024-11-27 04:39:07.805616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.351 [2024-11-27 04:39:07.805692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.351 [2024-11-27 04:39:07.805700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:11.351 [2024-11-27 04:39:07.805708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.351 [2024-11-27 04:39:07.805715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.351 [2024-11-27 04:39:07.805765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.351 [2024-11-27 04:39:07.805775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:11.351 [2024-11-27 04:39:07.805783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.352 [2024-11-27 04:39:07.805790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.352 [2024-11-27 04:39:07.805810] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.352 [2024-11-27 04:39:07.805818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:11.352 [2024-11-27 04:39:07.805825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.352 [2024-11-27 04:39:07.805833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.352 [2024-11-27 04:39:07.883329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.352 [2024-11-27 04:39:07.883391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:11.352 [2024-11-27 04:39:07.883403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.352 [2024-11-27 04:39:07.883415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-27 04:39:07.946336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.610 [2024-11-27 04:39:07.946378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:11.610 [2024-11-27 04:39:07.946389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.610 [2024-11-27 04:39:07.946397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-27 04:39:07.946449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.610 [2024-11-27 04:39:07.946458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:11.610 [2024-11-27 04:39:07.946466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.610 [2024-11-27 04:39:07.946473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-27 04:39:07.946500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.610 [2024-11-27 04:39:07.946512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:11.610 [2024-11-27 04:39:07.946520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.610 [2024-11-27 04:39:07.946527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-27 04:39:07.946615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.610 [2024-11-27 04:39:07.946625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:11.610 [2024-11-27 04:39:07.946633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.610 [2024-11-27 04:39:07.946640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-27 04:39:07.946669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.610 [2024-11-27 04:39:07.946678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:11.610 [2024-11-27 04:39:07.946688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.610 [2024-11-27 04:39:07.946696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-27 04:39:07.946748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.610 [2024-11-27 04:39:07.946759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:11.610 [2024-11-27 04:39:07.946766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.610 [2024-11-27 04:39:07.946774] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-27 04:39:07.946812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.610 [2024-11-27 04:39:07.946824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:11.610 [2024-11-27 04:39:07.946832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.610 [2024-11-27 04:39:07.946839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-27 04:39:07.946961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.926 ms, result 0 00:22:12.175 00:22:12.175 00:22:12.175 04:39:08 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76739 00:22:12.175 04:39:08 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76739 00:22:12.175 04:39:08 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:12.175 04:39:08 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76739 ']' 00:22:12.175 04:39:08 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.175 04:39:08 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.175 04:39:08 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.175 04:39:08 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.175 04:39:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:12.175 [2024-11-27 04:39:08.740541] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:22:12.175 [2024-11-27 04:39:08.740654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76739 ] 00:22:12.434 [2024-11-27 04:39:08.898364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.434 [2024-11-27 04:39:08.997976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.083 04:39:09 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.083 04:39:09 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:13.083 04:39:09 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:13.341 [2024-11-27 04:39:09.781924] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.341 [2024-11-27 04:39:09.781982] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.601 [2024-11-27 04:39:09.951616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.601 [2024-11-27 04:39:09.951659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:13.601 [2024-11-27 04:39:09.951673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:13.601 [2024-11-27 04:39:09.951681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.601 [2024-11-27 04:39:09.956407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.601 [2024-11-27 04:39:09.956491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:13.601 [2024-11-27 04:39:09.956522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.698 ms 00:22:13.601 [2024-11-27 04:39:09.956542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.602 [2024-11-27 04:39:09.956829] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:13.602 [2024-11-27 04:39:09.958828] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:13.602 [2024-11-27 04:39:09.958894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.602 [2024-11-27 04:39:09.958917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:13.602 [2024-11-27 04:39:09.958943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.091 ms 00:22:13.602 [2024-11-27 04:39:09.958967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.602 [2024-11-27 04:39:09.961327] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:13.602 [2024-11-27 04:39:09.975734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.602 [2024-11-27 04:39:09.975767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:13.602 [2024-11-27 04:39:09.975780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.424 ms 00:22:13.602 [2024-11-27 04:39:09.975790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.602 [2024-11-27 04:39:09.975867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.602 [2024-11-27 04:39:09.975882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:13.602 [2024-11-27 04:39:09.975890] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:13.602 [2024-11-27 04:39:09.975899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.602 [2024-11-27 04:39:09.980541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.602 [2024-11-27 04:39:09.980572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:13.602 [2024-11-27 04:39:09.980581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.595 ms 00:22:13.602 [2024-11-27 04:39:09.980590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.602 [2024-11-27 04:39:09.980687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.602 [2024-11-27 04:39:09.980699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:13.602 [2024-11-27 04:39:09.980707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:13.602 [2024-11-27 04:39:09.980719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.602 [2024-11-27 04:39:09.980759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.602 [2024-11-27 04:39:09.980769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:13.602 [2024-11-27 04:39:09.980777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:13.602 [2024-11-27 04:39:09.980785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.602 [2024-11-27 04:39:09.980809] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:13.602 [2024-11-27 04:39:09.983943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.602 [2024-11-27 04:39:09.983966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:13.602 [2024-11-27 04:39:09.983976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.138 ms 00:22:13.602 [2024-11-27 04:39:09.983984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.602 [2024-11-27 04:39:09.984019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.602 [2024-11-27 04:39:09.984027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:13.602 [2024-11-27 04:39:09.984039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:13.602 [2024-11-27 04:39:09.984045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.602 [2024-11-27 04:39:09.984066] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:13.602 [2024-11-27 04:39:09.984083] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:13.602 [2024-11-27 04:39:09.984123] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:13.602 [2024-11-27 04:39:09.984137] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:13.602 [2024-11-27 04:39:09.984240] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:13.602 [2024-11-27 04:39:09.984251] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:13.602 [2024-11-27 04:39:09.984266] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:13.602 [2024-11-27 04:39:09.984276] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:13.602 [2024-11-27 04:39:09.984286] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:13.602 [2024-11-27 04:39:09.984294] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:13.602 [2024-11-27 04:39:09.984302] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:13.602 [2024-11-27 04:39:09.984310] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:13.602 [2024-11-27 04:39:09.984319] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:13.602 [2024-11-27 04:39:09.984327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.602 [2024-11-27 04:39:09.984335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:13.602 [2024-11-27 04:39:09.984343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:22:13.602 [2024-11-27 04:39:09.984361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.602 [2024-11-27 04:39:09.984447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.602 [2024-11-27 04:39:09.984456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:13.602 [2024-11-27 04:39:09.984463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:13.602 [2024-11-27 04:39:09.984471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.602 [2024-11-27 04:39:09.984569] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:13.602 [2024-11-27 04:39:09.984579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:13.602 [2024-11-27 04:39:09.984587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.602 [2024-11-27 04:39:09.984596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.602 [2024-11-27 04:39:09.984603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:13.602 [2024-11-27 04:39:09.984611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:13.602 [2024-11-27 04:39:09.984618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:13.602 [2024-11-27 04:39:09.984630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:13.602 [2024-11-27 04:39:09.984637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:13.602 [2024-11-27 04:39:09.984645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.602 [2024-11-27 04:39:09.984652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:13.602 [2024-11-27 04:39:09.984660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:13.602 [2024-11-27 04:39:09.984667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.602 [2024-11-27 04:39:09.984675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:13.602 [2024-11-27 04:39:09.984681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:13.602 [2024-11-27 04:39:09.984689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.602 
[2024-11-27 04:39:09.984696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:13.602 [2024-11-27 04:39:09.984704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:13.602 [2024-11-27 04:39:09.984716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.602 [2024-11-27 04:39:09.984736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:13.602 [2024-11-27 04:39:09.984743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:13.602 [2024-11-27 04:39:09.984751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.602 [2024-11-27 04:39:09.984757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:13.602 [2024-11-27 04:39:09.984767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:13.602 [2024-11-27 04:39:09.984773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.602 [2024-11-27 04:39:09.984781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:13.602 [2024-11-27 04:39:09.984788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:13.602 [2024-11-27 04:39:09.984795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.602 [2024-11-27 04:39:09.984802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:13.602 [2024-11-27 04:39:09.984810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:13.602 [2024-11-27 04:39:09.984816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.602 [2024-11-27 04:39:09.984824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:13.602 [2024-11-27 04:39:09.984831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:13.602 [2024-11-27 04:39:09.984848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.602 [2024-11-27 04:39:09.984855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:13.602 [2024-11-27 04:39:09.984863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:13.602 [2024-11-27 04:39:09.984869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.602 [2024-11-27 04:39:09.984877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:13.602 [2024-11-27 04:39:09.984884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:13.602 [2024-11-27 04:39:09.984893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.602 [2024-11-27 04:39:09.984899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:13.602 [2024-11-27 04:39:09.984907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:13.602 [2024-11-27 04:39:09.984914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.602 [2024-11-27 04:39:09.984922] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:13.602 [2024-11-27 04:39:09.984931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:13.602 [2024-11-27 04:39:09.984940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.602 [2024-11-27 04:39:09.984947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.603 [2024-11-27 04:39:09.984956] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:13.603 [2024-11-27 04:39:09.984963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:13.603 [2024-11-27 04:39:09.984971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:13.603 [2024-11-27 04:39:09.984978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:13.603 [2024-11-27 04:39:09.984986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:13.603 [2024-11-27 04:39:09.984993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:13.603 [2024-11-27 04:39:09.985002] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:13.603 [2024-11-27 04:39:09.985011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.603 [2024-11-27 04:39:09.985023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:13.603 [2024-11-27 04:39:09.985030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:13.603 [2024-11-27 04:39:09.985039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:13.603 [2024-11-27 04:39:09.985046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:13.603 [2024-11-27 04:39:09.985055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:13.603 [2024-11-27 04:39:09.985062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:13.603 [2024-11-27 04:39:09.985070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:13.603 [2024-11-27 04:39:09.985077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:13.603 [2024-11-27 04:39:09.985085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:13.603 [2024-11-27 04:39:09.985092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:13.603 [2024-11-27 04:39:09.985101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:13.603 [2024-11-27 04:39:09.985107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:13.603 [2024-11-27 04:39:09.985116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:13.603 [2024-11-27 04:39:09.985123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:13.603 [2024-11-27 04:39:09.985132] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:13.603 [2024-11-27 
04:39:09.985139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.603 [2024-11-27 04:39:09.985151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:13.603 [2024-11-27 04:39:09.985158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:13.603 [2024-11-27 04:39:09.985167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:13.603 [2024-11-27 04:39:09.985174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:13.603 [2024-11-27 04:39:09.985183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.603 [2024-11-27 04:39:09.985190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:13.603 [2024-11-27 04:39:09.985199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:22:13.603 [2024-11-27 04:39:09.985208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.603 [2024-11-27 04:39:10.011160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.603 [2024-11-27 04:39:10.011195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:13.603 [2024-11-27 04:39:10.011209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.882 ms 00:22:13.603 [2024-11-27 04:39:10.011221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.603 [2024-11-27 04:39:10.011339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.603 [2024-11-27 04:39:10.011349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:13.603 [2024-11-27 04:39:10.011358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:13.603 [2024-11-27 04:39:10.011366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.603 [2024-11-27 04:39:10.042395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.603 [2024-11-27 04:39:10.042428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:13.603 [2024-11-27 04:39:10.042439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.005 ms 00:22:13.603 [2024-11-27 04:39:10.042447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.603 [2024-11-27 04:39:10.042506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.603 [2024-11-27 04:39:10.042515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:13.603 [2024-11-27 04:39:10.042525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:13.603 [2024-11-27 04:39:10.042532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.603 [2024-11-27 04:39:10.042854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.603 [2024-11-27 04:39:10.042868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:13.603 [2024-11-27 04:39:10.042880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:22:13.603 [2024-11-27 04:39:10.042887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:13.603 [2024-11-27 04:39:10.043010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.603 [2024-11-27 04:39:10.043018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:13.603 [2024-11-27 04:39:10.043028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:22:13.603 [2024-11-27 04:39:10.043035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.603 [2024-11-27 04:39:10.056991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.603 [2024-11-27 04:39:10.057016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:13.603 [2024-11-27 04:39:10.057028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.917 ms 00:22:13.603 [2024-11-27 04:39:10.057035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.603 [2024-11-27 04:39:10.082360] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:13.603 [2024-11-27 04:39:10.082405] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:13.603 [2024-11-27 04:39:10.082428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.603 [2024-11-27 04:39:10.082440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:13.603 [2024-11-27 04:39:10.082455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.296 ms 00:22:13.603 [2024-11-27 04:39:10.082472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.603 [2024-11-27 04:39:10.107494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.603 [2024-11-27 04:39:10.107526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:13.603 [2024-11-27 04:39:10.107539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.925 ms 00:22:13.603 [2024-11-27 04:39:10.107547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.603 [2024-11-27 04:39:10.119198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.603 [2024-11-27 04:39:10.119226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:13.603 [2024-11-27 04:39:10.119240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.576 ms 00:22:13.603 [2024-11-27 04:39:10.119247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.603 [2024-11-27 04:39:10.130440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.603 [2024-11-27 04:39:10.130468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:13.603 [2024-11-27 04:39:10.130480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.127 ms 00:22:13.603 [2024-11-27 04:39:10.130489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.603 [2024-11-27 04:39:10.131125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.603 [2024-11-27 04:39:10.131142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:13.603 [2024-11-27 04:39:10.131152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:22:13.603 [2024-11-27 04:39:10.131160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.861 [2024-11-27 
04:39:10.185421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.861 [2024-11-27 04:39:10.185467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:13.861 [2024-11-27 04:39:10.185482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.236 ms 00:22:13.861 [2024-11-27 04:39:10.185490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.861 [2024-11-27 04:39:10.195989] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:13.861 [2024-11-27 04:39:10.209637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.861 [2024-11-27 04:39:10.209682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:13.861 [2024-11-27 04:39:10.209693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.058 ms 00:22:13.861 [2024-11-27 04:39:10.209702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.861 [2024-11-27 04:39:10.209796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.861 [2024-11-27 04:39:10.209808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:13.861 [2024-11-27 04:39:10.209817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:13.861 [2024-11-27 04:39:10.209827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.861 [2024-11-27 04:39:10.209874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.861 [2024-11-27 04:39:10.209884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:13.861 [2024-11-27 04:39:10.209892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:13.861 [2024-11-27 04:39:10.209902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.861 [2024-11-27 04:39:10.209925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.861 [2024-11-27 04:39:10.209935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:13.861 [2024-11-27 04:39:10.209943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:13.861 [2024-11-27 04:39:10.209954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.861 [2024-11-27 04:39:10.209985] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:13.861 [2024-11-27 04:39:10.209997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.861 [2024-11-27 04:39:10.210007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:13.861 [2024-11-27 04:39:10.210016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:13.861 [2024-11-27 04:39:10.210025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.861 [2024-11-27 04:39:10.233024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.861 [2024-11-27 04:39:10.233056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:13.861 [2024-11-27 04:39:10.233069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.976 ms 00:22:13.861 [2024-11-27 04:39:10.233078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.861 [2024-11-27 04:39:10.233166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.861 [2024-11-27 04:39:10.233177] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:13.861 [2024-11-27 04:39:10.233189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:13.861 [2024-11-27 04:39:10.233196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.861 [2024-11-27 04:39:10.234322] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:13.861 [2024-11-27 04:39:10.237222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 282.437 ms, result 0 00:22:13.861 [2024-11-27 04:39:10.237995] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:13.861 Some configs were skipped because the RPC state that can call them passed over. 00:22:13.862 04:39:10 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:14.122 [2024-11-27 04:39:10.457300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.122 [2024-11-27 04:39:10.457358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:14.122 [2024-11-27 04:39:10.457371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.372 ms 00:22:14.122 [2024-11-27 04:39:10.457381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.122 [2024-11-27 04:39:10.457413] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.490 ms, result 0 00:22:14.122 true 00:22:14.122 04:39:10 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:14.122 [2024-11-27 04:39:10.649006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.122 [2024-11-27 04:39:10.649057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:14.122 [2024-11-27 04:39:10.649072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:22:14.122 [2024-11-27 04:39:10.649080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.122 [2024-11-27 04:39:10.649115] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 0.961 ms, result 0 00:22:14.122 true 00:22:14.122 04:39:10 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76739 00:22:14.122 04:39:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76739 ']' 00:22:14.122 04:39:10 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76739 00:22:14.122 04:39:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:14.122 04:39:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.122 04:39:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76739 00:22:14.122 killing process with pid 76739 00:22:14.122 04:39:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.122 04:39:10 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.122 04:39:10 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76739' 00:22:14.122 04:39:10 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76739 00:22:14.122 04:39:10 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76739 00:22:15.059 [2024-11-27 04:39:11.376052] 
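The two rpc.py calls above (trim.sh lines 99 and 100) trim both ends of the address space: 1024 blocks at LBA 0 and 1024 blocks at LBA 23591936, which is the last 1024-block stretch of the 23592960 L2P entries the layout dump below reports. Each returns true and shows up target-side as a short 'Process trim' management sequence (1.372 ms and 0.848 ms here). Reproduced by hand against a live ftl0, with the path used on this CI VM as an assumption for any other setup:

    # Trim 1024 blocks at the start and at the very end of ftl0.
    # SPDK_DIR matches this VM's checkout; adjust locally.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024
    "$SPDK_DIR/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

The killprocess helper traced after them is autotest_common.sh's usual teardown: confirm the pid still answers kill -0, confirm it is not sudo, then kill it and wait.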
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.059 [2024-11-27 04:39:11.376111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:15.059 [2024-11-27 04:39:11.376124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:15.059 [2024-11-27 04:39:11.376133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.059 [2024-11-27 04:39:11.376157] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:15.059 [2024-11-27 04:39:11.378742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.059 [2024-11-27 04:39:11.378775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:15.059 [2024-11-27 04:39:11.378789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.551 ms 00:22:15.059 [2024-11-27 04:39:11.378797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.059 [2024-11-27 04:39:11.379093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.059 [2024-11-27 04:39:11.379115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:15.059 [2024-11-27 04:39:11.379125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:22:15.059 [2024-11-27 04:39:11.379132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.059 [2024-11-27 04:39:11.383023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.059 [2024-11-27 04:39:11.383055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:15.059 [2024-11-27 04:39:11.383065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.870 ms 00:22:15.059 [2024-11-27 04:39:11.383073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.059 [2024-11-27 04:39:11.390036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.059 [2024-11-27 04:39:11.390068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:15.059 [2024-11-27 04:39:11.390079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.912 ms 00:22:15.059 [2024-11-27 04:39:11.390088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.059 [2024-11-27 04:39:11.399349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.059 [2024-11-27 04:39:11.399386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:15.059 [2024-11-27 04:39:11.399399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.205 ms 00:22:15.059 [2024-11-27 04:39:11.399406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.059 [2024-11-27 04:39:11.406335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.059 [2024-11-27 04:39:11.406370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:15.059 [2024-11-27 04:39:11.406381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.891 ms 00:22:15.059 [2024-11-27 04:39:11.406389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.059 [2024-11-27 04:39:11.406522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.059 [2024-11-27 04:39:11.406532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:15.059 [2024-11-27 04:39:11.406542] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:22:15.059 [2024-11-27 04:39:11.406549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.059 [2024-11-27 04:39:11.415984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.059 [2024-11-27 04:39:11.416016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:15.059 [2024-11-27 04:39:11.416027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.414 ms 00:22:15.059 [2024-11-27 04:39:11.416034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.059 [2024-11-27 04:39:11.425186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.059 [2024-11-27 04:39:11.425215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:15.059 [2024-11-27 04:39:11.425228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.116 ms 00:22:15.059 [2024-11-27 04:39:11.425236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.059 [2024-11-27 04:39:11.434015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.059 [2024-11-27 04:39:11.434045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:15.059 [2024-11-27 04:39:11.434056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.741 ms 00:22:15.059 [2024-11-27 04:39:11.434063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.059 [2024-11-27 04:39:11.442695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.059 [2024-11-27 04:39:11.442731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:15.059 [2024-11-27 04:39:11.442742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.559 ms 00:22:15.059 [2024-11-27 04:39:11.442749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.059 [2024-11-27 04:39:11.442782] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:15.059 [2024-11-27 04:39:11.442795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:15.059 [2024-11-27 04:39:11.442809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:15.059 [2024-11-27 04:39:11.442816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:15.059 [2024-11-27 04:39:11.442825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:15.059 [2024-11-27 04:39:11.442832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:15.059 [2024-11-27 04:39:11.442843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:15.059 [2024-11-27 04:39:11.442850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:15.059 [2024-11-27 04:39:11.442859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:15.059 [2024-11-27 04:39:11.442866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:15.059 [2024-11-27 04:39:11.442875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:15.059 [2024-11-27 04:39:11.442882] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:15.060 [... Band 12 through Band 84 elided: every band reports the identical 0 / 261120 wr_cnt: 0 state: free ...] [2024-11-27 04:39:11.443484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:22:15.060 [2024-11-27 04:39:11.443494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:15.061 [2024-11-27 04:39:11.443501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:15.061 [2024-11-27 04:39:11.443509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:15.061 [2024-11-27 04:39:11.443517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:15.061 [2024-11-27 04:39:11.443526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:15.061 [2024-11-27 04:39:11.443533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:15.061 [2024-11-27 04:39:11.443541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:15.061 [2024-11-27 04:39:11.443549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:15.061 [2024-11-27 04:39:11.443558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:15.061 [2024-11-27 04:39:11.443565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:15.061 [2024-11-27 04:39:11.443575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:15.061 [2024-11-27 04:39:11.443582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:15.061 [2024-11-27 04:39:11.443591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:15.061 [2024-11-27 04:39:11.443598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:15.061 [2024-11-27 04:39:11.443607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:15.061 [2024-11-27 04:39:11.443628] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:15.061 [2024-11-27 04:39:11.443638] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b290c8a9-93b3-40e8-884a-c7cbc275854f 00:22:15.061 [2024-11-27 04:39:11.443648] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:15.061 [2024-11-27 04:39:11.443656] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:15.061 [2024-11-27 04:39:11.443662] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:15.061 [2024-11-27 04:39:11.443671] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:15.061 [2024-11-27 04:39:11.443678] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:15.061 [2024-11-27 04:39:11.443687] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:15.061 [2024-11-27 04:39:11.443694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:15.061 [2024-11-27 04:39:11.443702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:15.061 [2024-11-27 04:39:11.443708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:15.061 [2024-11-27 04:39:11.443716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:15.061 [2024-11-27 04:39:11.443732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:15.061 [2024-11-27 04:39:11.443742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:22:15.061 [2024-11-27 04:39:11.443750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.061 [2024-11-27 04:39:11.455909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.061 [2024-11-27 04:39:11.455939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:15.061 [2024-11-27 04:39:11.455953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.139 ms 00:22:15.061 [2024-11-27 04:39:11.455961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.061 [2024-11-27 04:39:11.456320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.061 [2024-11-27 04:39:11.456343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:15.061 [2024-11-27 04:39:11.456355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:22:15.061 [2024-11-27 04:39:11.456363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.061 [2024-11-27 04:39:11.500140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.061 [2024-11-27 04:39:11.500177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:15.061 [2024-11-27 04:39:11.500190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.061 [2024-11-27 04:39:11.500199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.061 [2024-11-27 04:39:11.500310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.061 [2024-11-27 04:39:11.500326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:15.061 [2024-11-27 04:39:11.500339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.061 [2024-11-27 04:39:11.500346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.061 [2024-11-27 04:39:11.500393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.061 [2024-11-27 04:39:11.500406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:15.061 [2024-11-27 04:39:11.500417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.061 [2024-11-27 04:39:11.500424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.061 [2024-11-27 04:39:11.500443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.061 [2024-11-27 04:39:11.500451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:15.061 [2024-11-27 04:39:11.500461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.061 [2024-11-27 04:39:11.500470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.061 [2024-11-27 04:39:11.590790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.061 [2024-11-27 04:39:11.590843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:15.061 [2024-11-27 04:39:11.590859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.061 [2024-11-27 04:39:11.590867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.319 [2024-11-27 
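Killing the process is what triggered the 'FTL shutdown' management sequence traced above: persist L2P, NV cache, valid map, P2L, band and trim metadata, rewrite the superblock, flip the device to the clean state, dump per-band statistics, and then walk the Rollback steps (continuing below) that undo each piece of startup initialization. The same shutdown can be requested explicitly instead of relying on process exit; a hedged sketch, assuming the stock bdev_ftl_delete RPC, which this test does not use:

    # Explicit teardown of ftl0. This should run the same 'FTL
    # shutdown' sequence (persist metadata, set clean state) before
    # removing the bdev; in this log the sequence ran on SIGTERM.
    "$SPDK_DIR/scripts/rpc.py" bdev_ftl_delete -b ftl0

Either way, a clean shutdown is what lets the next startup below take the fast restore path instead of rebuilding metadata from scratch.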
04:39:11.653954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.319 [2024-11-27 04:39:11.654003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:15.319 [2024-11-27 04:39:11.654018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.319 [2024-11-27 04:39:11.654027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.319 [2024-11-27 04:39:11.654113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.319 [2024-11-27 04:39:11.654123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:15.319 [2024-11-27 04:39:11.654135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.319 [2024-11-27 04:39:11.654143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.319 [2024-11-27 04:39:11.654172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.319 [2024-11-27 04:39:11.654179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:15.319 [2024-11-27 04:39:11.654188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.319 [2024-11-27 04:39:11.654195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.319 [2024-11-27 04:39:11.654287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.319 [2024-11-27 04:39:11.654297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:15.319 [2024-11-27 04:39:11.654307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.319 [2024-11-27 04:39:11.654314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.319 [2024-11-27 04:39:11.654345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.319 [2024-11-27 04:39:11.654354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:15.319 [2024-11-27 04:39:11.654363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.319 [2024-11-27 04:39:11.654370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.319 [2024-11-27 04:39:11.654409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.319 [2024-11-27 04:39:11.654419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:15.319 [2024-11-27 04:39:11.654429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.319 [2024-11-27 04:39:11.654436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.319 [2024-11-27 04:39:11.654478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.319 [2024-11-27 04:39:11.654487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:15.319 [2024-11-27 04:39:11.654498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.319 [2024-11-27 04:39:11.654504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.319 [2024-11-27 04:39:11.654630] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 278.559 ms, result 0 00:22:15.884 04:39:12 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:15.884 [2024-11-27 04:39:12.398433] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:22:15.884 [2024-11-27 04:39:12.398553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76790 ] 00:22:16.143 [2024-11-27 04:39:12.555092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.143 [2024-11-27 04:39:12.665512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.418 [2024-11-27 04:39:12.922435] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:16.418 [2024-11-27 04:39:12.922504] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:16.677 [2024-11-27 04:39:13.076665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.677 [2024-11-27 04:39:13.076717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:16.677 [2024-11-27 04:39:13.076740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:16.677 [2024-11-27 04:39:13.076749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.677 [2024-11-27 04:39:13.079381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.677 [2024-11-27 04:39:13.079417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:16.677 [2024-11-27 04:39:13.079431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.614 ms 00:22:16.677 [2024-11-27 04:39:13.079438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.677 [2024-11-27 04:39:13.079507] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:16.677 [2024-11-27 04:39:13.080180] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:16.677 [2024-11-27 04:39:13.080206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.677 [2024-11-27 04:39:13.080214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:16.677 [2024-11-27 04:39:13.080223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:22:16.677 [2024-11-27 04:39:13.080230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.677 [2024-11-27 04:39:13.081423] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:16.677 [2024-11-27 04:39:13.093462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.677 [2024-11-27 04:39:13.093498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:16.677 [2024-11-27 04:39:13.093510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.039 ms 00:22:16.677 [2024-11-27 04:39:13.093519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.677 [2024-11-27 04:39:13.093603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.677 [2024-11-27 04:39:13.093614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:16.677 [2024-11-27 04:39:13.093622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:16.677 [2024-11-27 
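trim.sh line 105 above reads the test extent back out with spdk_dd: --ib names the input bdev, --of the output file, and --json points at the ftl.json config that recreates ftl0 (base bdev plus the nvc0n1p0 write-buffer cache) inside the short-lived dd app, which is why a full FTL startup trace follows here. At the 4 KiB FTL block size, --count=65536 is exactly the 256 MiB copied below. The same invocation, broken out for readability and reusing the SPDK_DIR assumption from earlier:

    # Dump 65536 x 4 KiB blocks (256 MiB) from ftl0 into a file.
    # Writing the other direction would swap in --if=<file> --ob=ftl0.
    "$SPDK_DIR/build/bin/spdk_dd" \
        --ib=ftl0 \
        --of="$SPDK_DIR/test/ftl/data" \
        --count=65536 \
        --json="$SPDK_DIR/test/ftl/config/ftl.json"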
04:39:13.093629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.677 [2024-11-27 04:39:13.098292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.677 [2024-11-27 04:39:13.098323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:16.677 [2024-11-27 04:39:13.098332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.622 ms 00:22:16.677 [2024-11-27 04:39:13.098339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.677 [2024-11-27 04:39:13.098421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.677 [2024-11-27 04:39:13.098431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:16.677 [2024-11-27 04:39:13.098439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:16.677 [2024-11-27 04:39:13.098446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.677 [2024-11-27 04:39:13.098475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.677 [2024-11-27 04:39:13.098488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:16.678 [2024-11-27 04:39:13.098495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:16.678 [2024-11-27 04:39:13.098503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.678 [2024-11-27 04:39:13.098522] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:16.678 [2024-11-27 04:39:13.101919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.678 [2024-11-27 04:39:13.101948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:16.678 [2024-11-27 04:39:13.101957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.401 ms 00:22:16.678 [2024-11-27 04:39:13.101964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.678 [2024-11-27 04:39:13.101997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.678 [2024-11-27 04:39:13.102005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:16.678 [2024-11-27 04:39:13.102013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:16.678 [2024-11-27 04:39:13.102020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.678 [2024-11-27 04:39:13.102039] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:16.678 [2024-11-27 04:39:13.102055] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:16.678 [2024-11-27 04:39:13.102088] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:16.678 [2024-11-27 04:39:13.102104] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:16.678 [2024-11-27 04:39:13.102204] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:16.678 [2024-11-27 04:39:13.102252] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:16.678 [2024-11-27 04:39:13.102263] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:22:16.678 [2024-11-27 04:39:13.102275] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:16.678 [2024-11-27 04:39:13.102284] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:16.678 [2024-11-27 04:39:13.102292] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:16.678 [2024-11-27 04:39:13.102299] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:16.678 [2024-11-27 04:39:13.102306] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:16.678 [2024-11-27 04:39:13.102313] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:16.678 [2024-11-27 04:39:13.102321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.678 [2024-11-27 04:39:13.102328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:16.678 [2024-11-27 04:39:13.102335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:22:16.678 [2024-11-27 04:39:13.102342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.678 [2024-11-27 04:39:13.102440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.678 [2024-11-27 04:39:13.102458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:16.678 [2024-11-27 04:39:13.102466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:16.678 [2024-11-27 04:39:13.102473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.678 [2024-11-27 04:39:13.102577] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:16.678 [2024-11-27 04:39:13.102587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:16.678 [2024-11-27 04:39:13.102594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:16.678 [2024-11-27 04:39:13.102602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.678 [2024-11-27 04:39:13.102614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:16.678 [2024-11-27 04:39:13.102620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:16.678 [2024-11-27 04:39:13.102627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:16.678 [2024-11-27 04:39:13.102634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:16.678 [2024-11-27 04:39:13.102642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:16.678 [2024-11-27 04:39:13.102648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:16.678 [2024-11-27 04:39:13.102655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:16.678 [2024-11-27 04:39:13.102666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:16.678 [2024-11-27 04:39:13.102672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:16.678 [2024-11-27 04:39:13.102679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:16.678 [2024-11-27 04:39:13.102686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:16.678 [2024-11-27 04:39:13.102692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.678 [2024-11-27 04:39:13.102699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:22:16.678 [2024-11-27 04:39:13.102705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:16.678 [2024-11-27 04:39:13.102712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.678 [2024-11-27 04:39:13.102718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:16.678 [2024-11-27 04:39:13.102737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:16.678 [2024-11-27 04:39:13.102745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.678 [2024-11-27 04:39:13.102752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:16.678 [2024-11-27 04:39:13.102759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:16.678 [2024-11-27 04:39:13.102766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.678 [2024-11-27 04:39:13.102773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:16.678 [2024-11-27 04:39:13.102779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:16.678 [2024-11-27 04:39:13.102786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.678 [2024-11-27 04:39:13.102792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:16.678 [2024-11-27 04:39:13.102799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:16.678 [2024-11-27 04:39:13.102805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.678 [2024-11-27 04:39:13.102812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:16.678 [2024-11-27 04:39:13.102818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:16.678 [2024-11-27 04:39:13.102825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:16.678 [2024-11-27 04:39:13.102831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:16.678 [2024-11-27 04:39:13.102837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:16.678 [2024-11-27 04:39:13.102844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:16.678 [2024-11-27 04:39:13.102850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:16.678 [2024-11-27 04:39:13.102857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:16.678 [2024-11-27 04:39:13.102863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.678 [2024-11-27 04:39:13.102870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:16.678 [2024-11-27 04:39:13.102876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:16.678 [2024-11-27 04:39:13.102882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.678 [2024-11-27 04:39:13.102888] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:16.678 [2024-11-27 04:39:13.102896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:16.678 [2024-11-27 04:39:13.102905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:16.678 [2024-11-27 04:39:13.102912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.678 [2024-11-27 04:39:13.102919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:16.678 [2024-11-27 04:39:13.102925] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:16.678 [2024-11-27 04:39:13.102931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:16.678 [2024-11-27 04:39:13.102938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:16.679 [2024-11-27 04:39:13.102944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:16.679 [2024-11-27 04:39:13.102950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:16.679 [2024-11-27 04:39:13.102960] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:16.679 [2024-11-27 04:39:13.102969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:16.679 [2024-11-27 04:39:13.102977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:16.679 [2024-11-27 04:39:13.102984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:16.679 [2024-11-27 04:39:13.102991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:16.679 [2024-11-27 04:39:13.102998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:16.679 [2024-11-27 04:39:13.103006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:16.679 [2024-11-27 04:39:13.103013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:16.679 [2024-11-27 04:39:13.103020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:16.679 [2024-11-27 04:39:13.103027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:16.679 [2024-11-27 04:39:13.103033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:16.679 [2024-11-27 04:39:13.103040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:16.679 [2024-11-27 04:39:13.103047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:16.679 [2024-11-27 04:39:13.103054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:16.679 [2024-11-27 04:39:13.103061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:16.679 [2024-11-27 04:39:13.103069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:16.679 [2024-11-27 04:39:13.103076] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:16.679 [2024-11-27 04:39:13.103084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:16.679 [2024-11-27 04:39:13.103092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:16.679 [2024-11-27 04:39:13.103099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:16.679 [2024-11-27 04:39:13.103106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:16.679 [2024-11-27 04:39:13.103113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:16.679 [2024-11-27 04:39:13.103121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.679 [2024-11-27 04:39:13.103130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:16.679 [2024-11-27 04:39:13.103138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:22:16.679 [2024-11-27 04:39:13.103144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.679 [2024-11-27 04:39:13.128456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.679 [2024-11-27 04:39:13.128494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:16.679 [2024-11-27 04:39:13.128504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.261 ms 00:22:16.679 [2024-11-27 04:39:13.128512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.679 [2024-11-27 04:39:13.128629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.679 [2024-11-27 04:39:13.128640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:16.679 [2024-11-27 04:39:13.128648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:16.679 [2024-11-27 04:39:13.128655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.679 [2024-11-27 04:39:13.174308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.679 [2024-11-27 04:39:13.174354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:16.679 [2024-11-27 04:39:13.174369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.632 ms 00:22:16.679 [2024-11-27 04:39:13.174378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.679 [2024-11-27 04:39:13.174482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.679 [2024-11-27 04:39:13.174494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:16.679 [2024-11-27 04:39:13.174502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:16.679 [2024-11-27 04:39:13.174509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.679 [2024-11-27 04:39:13.174840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.679 [2024-11-27 04:39:13.174863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:16.679 [2024-11-27 04:39:13.174879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:22:16.679 [2024-11-27 04:39:13.174886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.679 [2024-11-27 04:39:13.175013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
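In the SB metadata layout lines above each region is given as a hex block offset and block size; with the 4 KiB FTL block these reconcile exactly with the MiB figures in the human-readable region dump (0x5a00 blocks is the 90.00 MiB l2p region, 0x800 an 8.00 MiB P2L checkpoint, 0x20 the 0.12 MiB superblock). A quick conversion, with that 4 KiB block size as the stated assumption:

    # Convert hex blk_sz values from the SB layout dump to MiB,
    # assuming the 4 KiB FTL block size the MiB figures imply.
    for sz in 0x20 0x80 0x800 0x5a00; do
        printf '%-7s %9.2f MiB\n' "$sz" "$(echo "$((sz)) * 4096 / 1048576" | bc -l)"
    done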
[FTL][ftl0] Action 00:22:16.679 [2024-11-27 04:39:13.175023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:16.679 [2024-11-27 04:39:13.175031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:22:16.679 [2024-11-27 04:39:13.175038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.679 [2024-11-27 04:39:13.188176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.679 [2024-11-27 04:39:13.188206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:16.679 [2024-11-27 04:39:13.188216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.119 ms 00:22:16.679 [2024-11-27 04:39:13.188223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.679 [2024-11-27 04:39:13.200441] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:16.679 [2024-11-27 04:39:13.200473] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:16.679 [2024-11-27 04:39:13.200485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.679 [2024-11-27 04:39:13.200493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:16.679 [2024-11-27 04:39:13.200502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.163 ms 00:22:16.679 [2024-11-27 04:39:13.200508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.679 [2024-11-27 04:39:13.224440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.679 [2024-11-27 04:39:13.224474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:16.679 [2024-11-27 04:39:13.224485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.859 ms 00:22:16.679 [2024-11-27 04:39:13.224494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.679 [2024-11-27 04:39:13.235906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.679 [2024-11-27 04:39:13.235933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:16.679 [2024-11-27 04:39:13.235942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.235 ms 00:22:16.679 [2024-11-27 04:39:13.235949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.679 [2024-11-27 04:39:13.246895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.679 [2024-11-27 04:39:13.246922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:16.679 [2024-11-27 04:39:13.246932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.885 ms 00:22:16.679 [2024-11-27 04:39:13.246940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.679 [2024-11-27 04:39:13.247542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.680 [2024-11-27 04:39:13.247565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:16.680 [2024-11-27 04:39:13.247575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:22:16.680 [2024-11-27 04:39:13.247583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.938 [2024-11-27 04:39:13.301537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.938 [2024-11-27 
04:39:13.301583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:16.938 [2024-11-27 04:39:13.301595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.931 ms 00:22:16.938 [2024-11-27 04:39:13.301603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.938 [2024-11-27 04:39:13.311830] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:16.938 [2024-11-27 04:39:13.325592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.938 [2024-11-27 04:39:13.325632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:16.938 [2024-11-27 04:39:13.325645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.891 ms 00:22:16.938 [2024-11-27 04:39:13.325658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.938 [2024-11-27 04:39:13.325757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.938 [2024-11-27 04:39:13.325768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:16.938 [2024-11-27 04:39:13.325778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:16.938 [2024-11-27 04:39:13.325785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.938 [2024-11-27 04:39:13.325832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.938 [2024-11-27 04:39:13.325842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:16.938 [2024-11-27 04:39:13.325850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:16.938 [2024-11-27 04:39:13.325860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.938 [2024-11-27 04:39:13.325887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.938 [2024-11-27 04:39:13.325895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:16.938 [2024-11-27 04:39:13.325903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:16.938 [2024-11-27 04:39:13.325910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.938 [2024-11-27 04:39:13.325943] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:16.938 [2024-11-27 04:39:13.325952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.938 [2024-11-27 04:39:13.325959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:16.938 [2024-11-27 04:39:13.325966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:16.938 [2024-11-27 04:39:13.325973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.938 [2024-11-27 04:39:13.349097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.938 [2024-11-27 04:39:13.349136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:16.938 [2024-11-27 04:39:13.349147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.105 ms 00:22:16.938 [2024-11-27 04:39:13.349156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.938 [2024-11-27 04:39:13.349242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.938 [2024-11-27 04:39:13.349252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:16.938 [2024-11-27 
04:39:13.349260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:16.938 [2024-11-27 04:39:13.349268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.938 [2024-11-27 04:39:13.350142] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:16.938 [2024-11-27 04:39:13.353004] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 273.206 ms, result 0 00:22:16.938 [2024-11-27 04:39:13.353637] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:16.938 [2024-11-27 04:39:13.366522] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:17.870  [2024-11-27T04:39:15.831Z] Copying: 46/256 [MB] (46 MBps) [2024-11-27T04:39:16.766Z] Copying: 91/256 [MB] (45 MBps) [2024-11-27T04:39:17.703Z] Copying: 134/256 [MB] (42 MBps) [2024-11-27T04:39:18.637Z] Copying: 174/256 [MB] (40 MBps) [2024-11-27T04:39:19.573Z] Copying: 216/256 [MB] (41 MBps) [2024-11-27T04:39:19.573Z] Copying: 249/256 [MB] (33 MBps) [2024-11-27T04:39:20.139Z] Copying: 256/256 [MB] (average 41 MBps)[2024-11-27 04:39:19.975367] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:23.552 [2024-11-27 04:39:19.984443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.552 [2024-11-27 04:39:19.984482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:23.552 [2024-11-27 04:39:19.984499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:23.552 [2024-11-27 04:39:19.984507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.552 [2024-11-27 04:39:19.984530] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:23.552 [2024-11-27 04:39:19.987135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.552 [2024-11-27 04:39:19.987165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:23.552 [2024-11-27 04:39:19.987175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.590 ms 00:22:23.552 [2024-11-27 04:39:19.987184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.552 [2024-11-27 04:39:19.987470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.552 [2024-11-27 04:39:19.987486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:23.552 [2024-11-27 04:39:19.987495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:22:23.552 [2024-11-27 04:39:19.987502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.552 [2024-11-27 04:39:19.991533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.552 [2024-11-27 04:39:19.991556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:23.552 [2024-11-27 04:39:19.991565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.012 ms 00:22:23.552 [2024-11-27 04:39:19.991574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.552 [2024-11-27 04:39:19.998443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.552 [2024-11-27 04:39:19.998471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish 
L2P trims 00:22:23.552 [2024-11-27 04:39:19.998481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.852 ms 00:22:23.552 [2024-11-27 04:39:19.998490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.552 [2024-11-27 04:39:20.021943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.552 [2024-11-27 04:39:20.021988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:23.552 [2024-11-27 04:39:20.022000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.396 ms 00:22:23.552 [2024-11-27 04:39:20.022008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.552 [2024-11-27 04:39:20.035533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.552 [2024-11-27 04:39:20.035574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:23.552 [2024-11-27 04:39:20.035592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.496 ms 00:22:23.552 [2024-11-27 04:39:20.035600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.552 [2024-11-27 04:39:20.035754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.552 [2024-11-27 04:39:20.035765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:23.552 [2024-11-27 04:39:20.035782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:22:23.553 [2024-11-27 04:39:20.035790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.553 [2024-11-27 04:39:20.058682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.553 [2024-11-27 04:39:20.058718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:23.553 [2024-11-27 04:39:20.058737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.874 ms 00:22:23.553 [2024-11-27 04:39:20.058745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.553 [2024-11-27 04:39:20.080967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.553 [2024-11-27 04:39:20.081011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:23.553 [2024-11-27 04:39:20.081022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.195 ms 00:22:23.553 [2024-11-27 04:39:20.081030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.553 [2024-11-27 04:39:20.104846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.553 [2024-11-27 04:39:20.104914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:23.553 [2024-11-27 04:39:20.104927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.767 ms 00:22:23.553 [2024-11-27 04:39:20.104935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.553 [2024-11-27 04:39:20.127995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.553 [2024-11-27 04:39:20.128038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:23.553 [2024-11-27 04:39:20.128051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.998 ms 00:22:23.553 [2024-11-27 04:39:20.128059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.553 [2024-11-27 04:39:20.128086] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:23.553 [2024-11-27 
04:39:20.128100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 
[2024-11-27 04:39:20.128288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:22:23.553 [2024-11-27 04:39:20.128471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:23.553 [2024-11-27 04:39:20.128500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:23.554 [2024-11-27 04:39:20.128885] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:23.554 [2024-11-27 04:39:20.128892] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b290c8a9-93b3-40e8-884a-c7cbc275854f 00:22:23.554 [2024-11-27 04:39:20.128901] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:23.554 [2024-11-27 04:39:20.128908] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:23.554 [2024-11-27 04:39:20.128915] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:23.554 [2024-11-27 04:39:20.128922] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:23.554 [2024-11-27 04:39:20.128929] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:23.554 [2024-11-27 04:39:20.128937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:23.554 [2024-11-27 04:39:20.128950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:23.554 [2024-11-27 04:39:20.129594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:23.554 [2024-11-27 04:39:20.129610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:23.554 [2024-11-27 04:39:20.129620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.554 [2024-11-27 04:39:20.129632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:23.554 [2024-11-27 04:39:20.129644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.535 ms 00:22:23.554 [2024-11-27 04:39:20.129654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.813 [2024-11-27 04:39:20.144691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.813 [2024-11-27 04:39:20.144745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:23.813 [2024-11-27 04:39:20.144758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.992 ms 00:22:23.813 [2024-11-27 04:39:20.144766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.813 [2024-11-27 04:39:20.145141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.813 [2024-11-27 04:39:20.145246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:23.813 [2024-11-27 04:39:20.145260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:22:23.813 [2024-11-27 04:39:20.145268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.813 [2024-11-27 04:39:20.183259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.813 [2024-11-27 04:39:20.183302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:23.813 [2024-11-27 04:39:20.183313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.813 [2024-11-27 04:39:20.183325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.813 [2024-11-27 04:39:20.183434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.813 [2024-11-27 04:39:20.183443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:23.813 [2024-11-27 04:39:20.183450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.813 [2024-11-27 04:39:20.183457] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:23.813 [2024-11-27 04:39:20.183505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.813 [2024-11-27 04:39:20.183514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:23.813 [2024-11-27 04:39:20.183521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.813 [2024-11-27 04:39:20.183528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.813 [2024-11-27 04:39:20.183547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.813 [2024-11-27 04:39:20.183555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:23.813 [2024-11-27 04:39:20.183562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.813 [2024-11-27 04:39:20.183569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.813 [2024-11-27 04:39:20.260181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.813 [2024-11-27 04:39:20.260241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:23.813 [2024-11-27 04:39:20.260252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.813 [2024-11-27 04:39:20.260260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.813 [2024-11-27 04:39:20.322290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.813 [2024-11-27 04:39:20.322340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:23.813 [2024-11-27 04:39:20.322350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.813 [2024-11-27 04:39:20.322358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.813 [2024-11-27 04:39:20.322432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.813 [2024-11-27 04:39:20.322441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:23.813 [2024-11-27 04:39:20.322449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.813 [2024-11-27 04:39:20.322456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.813 [2024-11-27 04:39:20.322485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.813 [2024-11-27 04:39:20.322495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:23.813 [2024-11-27 04:39:20.322503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.813 [2024-11-27 04:39:20.322510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.813 [2024-11-27 04:39:20.322594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.813 [2024-11-27 04:39:20.322604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:23.813 [2024-11-27 04:39:20.322611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.813 [2024-11-27 04:39:20.322618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.813 [2024-11-27 04:39:20.322649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.813 [2024-11-27 04:39:20.322657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:23.813 [2024-11-27 04:39:20.322667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
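
Every management step in the FTL traces here is logged as a quadruple of records from mngt/ftl_mngt.c: an Action (or Rollback) marker, then name, duration, and status (source lines 427/428/430/431). On an unwrapped one-record-per-line copy of such a log, the per-step timings can be pulled out with a short awk filter like the one below (the field-separator trick and the ftl0.log file name are illustrative, not part of the test suite):

# Print "duration  step-name" for every traced management step.
# 428/430 are the trace_step source lines that carry name: and duration:.
awk -F 'name: |duration: ' \
    '/428:trace_step:.*name:/     { step = $2 }
     /430:trace_step:.*duration:/ { printf "%12s  %s\n", $2, step }' ftl0.log
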
00:22:23.813 [2024-11-27 04:39:20.322674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.813 [2024-11-27 04:39:20.322710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.813 [2024-11-27 04:39:20.322718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:23.813 [2024-11-27 04:39:20.322751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.813 [2024-11-27 04:39:20.322759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.813 [2024-11-27 04:39:20.322813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.813 [2024-11-27 04:39:20.322826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:23.813 [2024-11-27 04:39:20.322834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.813 [2024-11-27 04:39:20.322841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.813 [2024-11-27 04:39:20.322971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 338.507 ms, result 0 00:22:24.464 00:22:24.464 00:22:24.464 04:39:21 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:25.029 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:25.029 04:39:21 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:25.029 04:39:21 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:25.029 04:39:21 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:25.029 04:39:21 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:25.029 04:39:21 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:25.029 04:39:21 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:25.287 Process with pid 76739 is not found 00:22:25.287 04:39:21 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76739 00:22:25.287 04:39:21 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76739 ']' 00:22:25.287 04:39:21 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76739 00:22:25.287 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76739) - No such process 00:22:25.287 04:39:21 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76739 is not found' 00:22:25.287 00:22:25.287 real 0m54.828s 00:22:25.287 user 1m28.883s 00:22:25.287 sys 0m5.245s 00:22:25.287 04:39:21 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:25.287 04:39:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:25.287 ************************************ 00:22:25.287 END TEST ftl_trim 00:22:25.287 ************************************ 00:22:25.287 04:39:21 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:25.287 04:39:21 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:25.287 04:39:21 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:25.287 04:39:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:25.287 ************************************ 00:22:25.287 START TEST ftl_restore 00:22:25.287 ************************************ 00:22:25.287 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:25.287 * Looking for test storage... 00:22:25.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:25.287 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:25.287 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:22:25.287 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:25.287 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:25.287 04:39:21 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:22:25.287 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:25.287 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:25.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.287 --rc genhtml_branch_coverage=1 00:22:25.287 --rc genhtml_function_coverage=1 00:22:25.287 --rc genhtml_legend=1 00:22:25.287 --rc geninfo_all_blocks=1 00:22:25.287 --rc geninfo_unexecuted_blocks=1 00:22:25.287 00:22:25.287 ' 00:22:25.287 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:25.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.287 --rc genhtml_branch_coverage=1 00:22:25.287 --rc genhtml_function_coverage=1 00:22:25.287 --rc genhtml_legend=1 00:22:25.287 --rc geninfo_all_blocks=1 00:22:25.287 --rc geninfo_unexecuted_blocks=1 00:22:25.287 00:22:25.287 ' 00:22:25.287 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:25.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.287 --rc genhtml_branch_coverage=1 00:22:25.287 --rc genhtml_function_coverage=1 00:22:25.287 --rc genhtml_legend=1 00:22:25.287 --rc geninfo_all_blocks=1 00:22:25.287 --rc geninfo_unexecuted_blocks=1 00:22:25.287 00:22:25.287 ' 00:22:25.287 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:25.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.288 --rc genhtml_branch_coverage=1 00:22:25.288 --rc genhtml_function_coverage=1 00:22:25.288 --rc genhtml_legend=1 00:22:25.288 --rc geninfo_all_blocks=1 00:22:25.288 --rc geninfo_unexecuted_blocks=1 00:22:25.288 00:22:25.288 ' 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
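
The cmp_versions trace just above is autotest_common.sh checking the installed lcov (1.15 in this run) against version 2 before settling on the legacy --rc lcov_branch_coverage=1 style options. A minimal sketch of that pure-bash dotted-version comparison follows; version_lt is a name introduced here (scripts/common.sh spells it lt on top of cmp_versions), and non-numeric fields are simply zeroed rather than handled by a separate decimal helper:

# Return 0 (true) when version $1 is strictly lower than version $2.
# Fields are split on '.', '-' and ':' and compared numerically left to right;
# missing or non-numeric fields count as 0.
version_lt() {
    local IFS=.-: v d1 d2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        d1=${ver1[v]:-0}; [[ $d1 =~ ^[0-9]+$ ]] || d1=0
        d2=${ver2[v]:-0}; [[ $d2 =~ ^[0-9]+$ ]] || d2=0
        (( 10#$d1 < 10#$d2 )) && return 0    # 10# forces base 10 ("01" stays 1)
        (( 10#$d1 > 10#$d2 )) && return 1
    done
    return 1    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* coverage flags"
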
00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Mctx32uHDY 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:25.288 
04:39:21 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76968 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76968 00:22:25.288 04:39:21 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:25.288 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 76968 ']' 00:22:25.288 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.288 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.288 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.288 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.288 04:39:21 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:25.545 [2024-11-27 04:39:21.912640] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:22:25.545 [2024-11-27 04:39:21.912943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76968 ] 00:22:25.545 [2024-11-27 04:39:22.074971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.804 [2024-11-27 04:39:22.177754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.370 04:39:22 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.370 04:39:22 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:22:26.370 04:39:22 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:26.370 04:39:22 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:26.370 04:39:22 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:26.370 04:39:22 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:26.370 04:39:22 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:26.370 04:39:22 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:26.628 04:39:23 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:26.628 04:39:23 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:26.628 04:39:23 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:26.628 04:39:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:26.628 04:39:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:26.628 04:39:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:26.628 04:39:23 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:26.628 04:39:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:26.886 04:39:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:26.886 { 00:22:26.886 "name": "nvme0n1", 00:22:26.886 "aliases": [ 00:22:26.886 "c1c3db1e-ef41-4fcd-a92b-9a3895bb83ff" 00:22:26.886 ], 00:22:26.886 "product_name": "NVMe disk", 00:22:26.886 "block_size": 4096, 00:22:26.886 "num_blocks": 1310720, 00:22:26.886 "uuid": 
"c1c3db1e-ef41-4fcd-a92b-9a3895bb83ff", 00:22:26.886 "numa_id": -1, 00:22:26.886 "assigned_rate_limits": { 00:22:26.886 "rw_ios_per_sec": 0, 00:22:26.886 "rw_mbytes_per_sec": 0, 00:22:26.886 "r_mbytes_per_sec": 0, 00:22:26.886 "w_mbytes_per_sec": 0 00:22:26.886 }, 00:22:26.886 "claimed": true, 00:22:26.886 "claim_type": "read_many_write_one", 00:22:26.886 "zoned": false, 00:22:26.886 "supported_io_types": { 00:22:26.886 "read": true, 00:22:26.886 "write": true, 00:22:26.886 "unmap": true, 00:22:26.886 "flush": true, 00:22:26.886 "reset": true, 00:22:26.886 "nvme_admin": true, 00:22:26.886 "nvme_io": true, 00:22:26.886 "nvme_io_md": false, 00:22:26.886 "write_zeroes": true, 00:22:26.886 "zcopy": false, 00:22:26.886 "get_zone_info": false, 00:22:26.886 "zone_management": false, 00:22:26.886 "zone_append": false, 00:22:26.886 "compare": true, 00:22:26.886 "compare_and_write": false, 00:22:26.886 "abort": true, 00:22:26.886 "seek_hole": false, 00:22:26.886 "seek_data": false, 00:22:26.886 "copy": true, 00:22:26.886 "nvme_iov_md": false 00:22:26.886 }, 00:22:26.886 "driver_specific": { 00:22:26.886 "nvme": [ 00:22:26.886 { 00:22:26.886 "pci_address": "0000:00:11.0", 00:22:26.886 "trid": { 00:22:26.886 "trtype": "PCIe", 00:22:26.886 "traddr": "0000:00:11.0" 00:22:26.886 }, 00:22:26.886 "ctrlr_data": { 00:22:26.886 "cntlid": 0, 00:22:26.886 "vendor_id": "0x1b36", 00:22:26.886 "model_number": "QEMU NVMe Ctrl", 00:22:26.886 "serial_number": "12341", 00:22:26.886 "firmware_revision": "8.0.0", 00:22:26.886 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:26.886 "oacs": { 00:22:26.886 "security": 0, 00:22:26.886 "format": 1, 00:22:26.886 "firmware": 0, 00:22:26.886 "ns_manage": 1 00:22:26.886 }, 00:22:26.886 "multi_ctrlr": false, 00:22:26.886 "ana_reporting": false 00:22:26.886 }, 00:22:26.886 "vs": { 00:22:26.886 "nvme_version": "1.4" 00:22:26.886 }, 00:22:26.886 "ns_data": { 00:22:26.886 "id": 1, 00:22:26.886 "can_share": false 00:22:26.886 } 00:22:26.886 } 00:22:26.886 ], 00:22:26.886 "mp_policy": "active_passive" 00:22:26.886 } 00:22:26.886 } 00:22:26.886 ]' 00:22:26.886 04:39:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:26.886 04:39:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:26.886 04:39:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:26.886 04:39:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:26.886 04:39:23 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:26.886 04:39:23 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:22:26.886 04:39:23 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:26.886 04:39:23 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:26.886 04:39:23 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:26.886 04:39:23 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:26.886 04:39:23 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:27.144 04:39:23 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=72044cfe-54c3-46a0-8bef-a2899f98d9e0 00:22:27.144 04:39:23 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:27.144 04:39:23 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 72044cfe-54c3-46a0-8bef-a2899f98d9e0 00:22:27.402 04:39:23 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:22:27.659 04:39:24 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=9983f70d-3ef3-4af9-b230-95f9a1cd1683 00:22:27.659 04:39:24 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9983f70d-3ef3-4af9-b230-95f9a1cd1683 00:22:27.659 04:39:24 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=369a3e5b-c6f7-4032-b3a9-34c9b0699f01 00:22:27.659 04:39:24 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:27.659 04:39:24 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 369a3e5b-c6f7-4032-b3a9-34c9b0699f01 00:22:27.659 04:39:24 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:27.659 04:39:24 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:27.659 04:39:24 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=369a3e5b-c6f7-4032-b3a9-34c9b0699f01 00:22:27.659 04:39:24 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:27.659 04:39:24 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 369a3e5b-c6f7-4032-b3a9-34c9b0699f01 00:22:27.660 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=369a3e5b-c6f7-4032-b3a9-34c9b0699f01 00:22:27.660 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:27.660 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:27.660 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:27.660 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 369a3e5b-c6f7-4032-b3a9-34c9b0699f01 00:22:27.917 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:27.917 { 00:22:27.917 "name": "369a3e5b-c6f7-4032-b3a9-34c9b0699f01", 00:22:27.917 "aliases": [ 00:22:27.917 "lvs/nvme0n1p0" 00:22:27.917 ], 00:22:27.917 "product_name": "Logical Volume", 00:22:27.917 "block_size": 4096, 00:22:27.917 "num_blocks": 26476544, 00:22:27.917 "uuid": "369a3e5b-c6f7-4032-b3a9-34c9b0699f01", 00:22:27.917 "assigned_rate_limits": { 00:22:27.917 "rw_ios_per_sec": 0, 00:22:27.917 "rw_mbytes_per_sec": 0, 00:22:27.917 "r_mbytes_per_sec": 0, 00:22:27.917 "w_mbytes_per_sec": 0 00:22:27.917 }, 00:22:27.917 "claimed": false, 00:22:27.917 "zoned": false, 00:22:27.917 "supported_io_types": { 00:22:27.917 "read": true, 00:22:27.917 "write": true, 00:22:27.917 "unmap": true, 00:22:27.917 "flush": false, 00:22:27.917 "reset": true, 00:22:27.917 "nvme_admin": false, 00:22:27.917 "nvme_io": false, 00:22:27.917 "nvme_io_md": false, 00:22:27.917 "write_zeroes": true, 00:22:27.917 "zcopy": false, 00:22:27.917 "get_zone_info": false, 00:22:27.917 "zone_management": false, 00:22:27.917 "zone_append": false, 00:22:27.917 "compare": false, 00:22:27.917 "compare_and_write": false, 00:22:27.917 "abort": false, 00:22:27.917 "seek_hole": true, 00:22:27.917 "seek_data": true, 00:22:27.917 "copy": false, 00:22:27.917 "nvme_iov_md": false 00:22:27.917 }, 00:22:27.917 "driver_specific": { 00:22:27.917 "lvol": { 00:22:27.917 "lvol_store_uuid": "9983f70d-3ef3-4af9-b230-95f9a1cd1683", 00:22:27.917 "base_bdev": "nvme0n1", 00:22:27.917 "thin_provision": true, 00:22:27.917 "num_allocated_clusters": 0, 00:22:27.917 "snapshot": false, 00:22:27.917 "clone": false, 00:22:27.917 "esnap_clone": false 00:22:27.917 } 00:22:27.917 } 00:22:27.917 } 00:22:27.917 ]' 00:22:27.917 04:39:24 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:27.917 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:27.918 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:27.918 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:27.918 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:27.918 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:27.918 04:39:24 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:27.918 04:39:24 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:27.918 04:39:24 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:28.178 04:39:24 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:28.178 04:39:24 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:28.178 04:39:24 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 369a3e5b-c6f7-4032-b3a9-34c9b0699f01 00:22:28.178 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=369a3e5b-c6f7-4032-b3a9-34c9b0699f01 00:22:28.178 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:28.178 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:28.178 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:28.178 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 369a3e5b-c6f7-4032-b3a9-34c9b0699f01 00:22:28.437 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:28.437 { 00:22:28.437 "name": "369a3e5b-c6f7-4032-b3a9-34c9b0699f01", 00:22:28.437 "aliases": [ 00:22:28.437 "lvs/nvme0n1p0" 00:22:28.437 ], 00:22:28.437 "product_name": "Logical Volume", 00:22:28.437 "block_size": 4096, 00:22:28.437 "num_blocks": 26476544, 00:22:28.437 "uuid": "369a3e5b-c6f7-4032-b3a9-34c9b0699f01", 00:22:28.437 "assigned_rate_limits": { 00:22:28.437 "rw_ios_per_sec": 0, 00:22:28.437 "rw_mbytes_per_sec": 0, 00:22:28.437 "r_mbytes_per_sec": 0, 00:22:28.437 "w_mbytes_per_sec": 0 00:22:28.437 }, 00:22:28.437 "claimed": false, 00:22:28.437 "zoned": false, 00:22:28.437 "supported_io_types": { 00:22:28.437 "read": true, 00:22:28.437 "write": true, 00:22:28.437 "unmap": true, 00:22:28.437 "flush": false, 00:22:28.437 "reset": true, 00:22:28.437 "nvme_admin": false, 00:22:28.437 "nvme_io": false, 00:22:28.437 "nvme_io_md": false, 00:22:28.437 "write_zeroes": true, 00:22:28.437 "zcopy": false, 00:22:28.437 "get_zone_info": false, 00:22:28.437 "zone_management": false, 00:22:28.437 "zone_append": false, 00:22:28.437 "compare": false, 00:22:28.437 "compare_and_write": false, 00:22:28.437 "abort": false, 00:22:28.437 "seek_hole": true, 00:22:28.437 "seek_data": true, 00:22:28.437 "copy": false, 00:22:28.437 "nvme_iov_md": false 00:22:28.437 }, 00:22:28.437 "driver_specific": { 00:22:28.437 "lvol": { 00:22:28.437 "lvol_store_uuid": "9983f70d-3ef3-4af9-b230-95f9a1cd1683", 00:22:28.437 "base_bdev": "nvme0n1", 00:22:28.437 "thin_provision": true, 00:22:28.437 "num_allocated_clusters": 0, 00:22:28.437 "snapshot": false, 00:22:28.437 "clone": false, 00:22:28.437 "esnap_clone": false 00:22:28.437 } 00:22:28.437 } 00:22:28.437 } 00:22:28.437 ]' 00:22:28.437 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
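
The bdev_get_bdevs / jq pairs in this stretch are the test suite's get_bdev_size helper at work: ask the target for a bdev's geometry over RPC, then convert block_size × num_blocks into MiB. A condensed sketch of that computation (bdev_size_mib is a name introduced here; the rpc.py path is the one used throughout this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Print a bdev's size in MiB from its block_size and num_blocks.
bdev_size_mib() {
    local bdev_info bs nb
    bdev_info=$($rpc bdev_get_bdevs -b "$1")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 for both bdevs in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
    echo $(( bs * nb / 1024 / 1024 ))
}

bdev_size_mib nvme0n1                                 # 4096 * 1310720  / 2^20 = 5120
bdev_size_mib 369a3e5b-c6f7-4032-b3a9-34c9b0699f01    # 4096 * 26476544 / 2^20 = 103424
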
00:22:28.437 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:28.437 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:28.437 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:28.437 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:28.437 04:39:24 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:28.437 04:39:24 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:28.437 04:39:24 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:28.695 04:39:25 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:28.695 04:39:25 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 369a3e5b-c6f7-4032-b3a9-34c9b0699f01 00:22:28.695 04:39:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=369a3e5b-c6f7-4032-b3a9-34c9b0699f01 00:22:28.695 04:39:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:28.695 04:39:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:28.695 04:39:25 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:28.695 04:39:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 369a3e5b-c6f7-4032-b3a9-34c9b0699f01 00:22:28.953 04:39:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:28.953 { 00:22:28.953 "name": "369a3e5b-c6f7-4032-b3a9-34c9b0699f01", 00:22:28.953 "aliases": [ 00:22:28.953 "lvs/nvme0n1p0" 00:22:28.953 ], 00:22:28.953 "product_name": "Logical Volume", 00:22:28.953 "block_size": 4096, 00:22:28.953 "num_blocks": 26476544, 00:22:28.953 "uuid": "369a3e5b-c6f7-4032-b3a9-34c9b0699f01", 00:22:28.953 "assigned_rate_limits": { 00:22:28.953 "rw_ios_per_sec": 0, 00:22:28.953 "rw_mbytes_per_sec": 0, 00:22:28.953 "r_mbytes_per_sec": 0, 00:22:28.953 "w_mbytes_per_sec": 0 00:22:28.953 }, 00:22:28.953 "claimed": false, 00:22:28.953 "zoned": false, 00:22:28.953 "supported_io_types": { 00:22:28.953 "read": true, 00:22:28.953 "write": true, 00:22:28.953 "unmap": true, 00:22:28.953 "flush": false, 00:22:28.953 "reset": true, 00:22:28.953 "nvme_admin": false, 00:22:28.953 "nvme_io": false, 00:22:28.953 "nvme_io_md": false, 00:22:28.953 "write_zeroes": true, 00:22:28.953 "zcopy": false, 00:22:28.953 "get_zone_info": false, 00:22:28.953 "zone_management": false, 00:22:28.953 "zone_append": false, 00:22:28.953 "compare": false, 00:22:28.953 "compare_and_write": false, 00:22:28.953 "abort": false, 00:22:28.953 "seek_hole": true, 00:22:28.953 "seek_data": true, 00:22:28.953 "copy": false, 00:22:28.953 "nvme_iov_md": false 00:22:28.953 }, 00:22:28.953 "driver_specific": { 00:22:28.953 "lvol": { 00:22:28.953 "lvol_store_uuid": "9983f70d-3ef3-4af9-b230-95f9a1cd1683", 00:22:28.953 "base_bdev": "nvme0n1", 00:22:28.953 "thin_provision": true, 00:22:28.953 "num_allocated_clusters": 0, 00:22:28.953 "snapshot": false, 00:22:28.953 "clone": false, 00:22:28.953 "esnap_clone": false 00:22:28.953 } 00:22:28.953 } 00:22:28.953 } 00:22:28.953 ]' 00:22:28.953 04:39:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:28.953 04:39:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:28.953 04:39:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:28.953 04:39:25 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:22:28.953 04:39:25 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:28.953 04:39:25 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:28.954 04:39:25 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:28.954 04:39:25 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 369a3e5b-c6f7-4032-b3a9-34c9b0699f01 --l2p_dram_limit 10' 00:22:28.954 04:39:25 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:28.954 04:39:25 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:28.954 04:39:25 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:28.954 04:39:25 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:28.954 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:28.954 04:39:25 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 369a3e5b-c6f7-4032-b3a9-34c9b0699f01 --l2p_dram_limit 10 -c nvc0n1p0 00:22:29.211 [2024-11-27 04:39:25.634899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.211 [2024-11-27 04:39:25.634957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:29.211 [2024-11-27 04:39:25.634974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:29.211 [2024-11-27 04:39:25.634983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.211 [2024-11-27 04:39:25.635044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.211 [2024-11-27 04:39:25.635055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:29.211 [2024-11-27 04:39:25.635065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:29.211 [2024-11-27 04:39:25.635072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.211 [2024-11-27 04:39:25.635095] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:29.211 [2024-11-27 04:39:25.635894] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:29.211 [2024-11-27 04:39:25.635914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.211 [2024-11-27 04:39:25.635922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:29.211 [2024-11-27 04:39:25.635932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms 00:22:29.211 [2024-11-27 04:39:25.635940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.211 [2024-11-27 04:39:25.636005] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5a28c5f8-1fbc-4d83-88bc-ea3ff92fe155 00:22:29.211 [2024-11-27 04:39:25.637115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.211 [2024-11-27 04:39:25.637248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:29.211 [2024-11-27 04:39:25.637264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:29.211 [2024-11-27 04:39:25.637274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.211 [2024-11-27 04:39:25.642650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.211 [2024-11-27 
04:39:25.642773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:29.211 [2024-11-27 04:39:25.642830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.320 ms 00:22:29.211 [2024-11-27 04:39:25.642856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.211 [2024-11-27 04:39:25.642958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.211 [2024-11-27 04:39:25.642996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:29.211 [2024-11-27 04:39:25.643017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:29.211 [2024-11-27 04:39:25.643077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.212 [2024-11-27 04:39:25.643139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.212 [2024-11-27 04:39:25.643172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:29.212 [2024-11-27 04:39:25.643239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:29.212 [2024-11-27 04:39:25.643294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.212 [2024-11-27 04:39:25.643327] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:29.212 [2024-11-27 04:39:25.646980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.212 [2024-11-27 04:39:25.647083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:29.212 [2024-11-27 04:39:25.647138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.655 ms 00:22:29.212 [2024-11-27 04:39:25.647160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.212 [2024-11-27 04:39:25.647206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.212 [2024-11-27 04:39:25.647254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:29.212 [2024-11-27 04:39:25.647276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:29.212 [2024-11-27 04:39:25.647358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.212 [2024-11-27 04:39:25.647418] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:29.212 [2024-11-27 04:39:25.647645] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:29.212 [2024-11-27 04:39:25.647737] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:29.212 [2024-11-27 04:39:25.647797] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:29.212 [2024-11-27 04:39:25.647892] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:29.212 [2024-11-27 04:39:25.647925] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:29.212 [2024-11-27 04:39:25.647955] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:29.212 [2024-11-27 04:39:25.647977] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:29.212 [2024-11-27 04:39:25.647996] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:29.212 [2024-11-27 04:39:25.648122] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:29.212 [2024-11-27 04:39:25.648149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.212 [2024-11-27 04:39:25.648175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:29.212 [2024-11-27 04:39:25.648199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:22:29.212 [2024-11-27 04:39:25.648217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.212 [2024-11-27 04:39:25.648318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.212 [2024-11-27 04:39:25.648339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:29.212 [2024-11-27 04:39:25.648360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:29.212 [2024-11-27 04:39:25.648377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.212 [2024-11-27 04:39:25.648492] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:29.212 [2024-11-27 04:39:25.648515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:29.212 [2024-11-27 04:39:25.648536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.212 [2024-11-27 04:39:25.648556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.212 [2024-11-27 04:39:25.648606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:29.212 [2024-11-27 04:39:25.648628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:29.212 [2024-11-27 04:39:25.648649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:29.212 [2024-11-27 04:39:25.648700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:29.212 [2024-11-27 04:39:25.648735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:29.212 [2024-11-27 04:39:25.648756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.212 [2024-11-27 04:39:25.648797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:29.212 [2024-11-27 04:39:25.648818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:29.212 [2024-11-27 04:39:25.648838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.212 [2024-11-27 04:39:25.648868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:29.212 [2024-11-27 04:39:25.648983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:29.212 [2024-11-27 04:39:25.649005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.212 [2024-11-27 04:39:25.649028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:29.212 [2024-11-27 04:39:25.649046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:29.212 [2024-11-27 04:39:25.649066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.212 [2024-11-27 04:39:25.649135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:29.212 [2024-11-27 04:39:25.649161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:29.212 [2024-11-27 04:39:25.649179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.212 [2024-11-27 04:39:25.649199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:29.212 
[2024-11-27 04:39:25.649218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:29.212 [2024-11-27 04:39:25.649237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.212 [2024-11-27 04:39:25.649297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:29.212 [2024-11-27 04:39:25.649316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:29.212 [2024-11-27 04:39:25.649334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.212 [2024-11-27 04:39:25.649353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:29.212 [2024-11-27 04:39:25.649371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:29.212 [2024-11-27 04:39:25.649422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.212 [2024-11-27 04:39:25.649445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:29.212 [2024-11-27 04:39:25.649498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:29.212 [2024-11-27 04:39:25.649519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.212 [2024-11-27 04:39:25.649560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:29.212 [2024-11-27 04:39:25.649581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:29.212 [2024-11-27 04:39:25.649601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.212 [2024-11-27 04:39:25.649619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:29.212 [2024-11-27 04:39:25.649668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:29.212 [2024-11-27 04:39:25.649688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.212 [2024-11-27 04:39:25.649709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:29.212 [2024-11-27 04:39:25.649743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:29.212 [2024-11-27 04:39:25.649787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.212 [2024-11-27 04:39:25.649809] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:29.212 [2024-11-27 04:39:25.649862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:29.212 [2024-11-27 04:39:25.649885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.212 [2024-11-27 04:39:25.649906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.212 [2024-11-27 04:39:25.649947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:29.212 [2024-11-27 04:39:25.649972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:29.212 [2024-11-27 04:39:25.649991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:29.212 [2024-11-27 04:39:25.650084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:29.212 [2024-11-27 04:39:25.650107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:29.212 [2024-11-27 04:39:25.650127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:29.212 [2024-11-27 04:39:25.650149] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:29.212 [2024-11-27 
04:39:25.650184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.212 [2024-11-27 04:39:25.650290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:29.212 [2024-11-27 04:39:25.650322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:29.212 [2024-11-27 04:39:25.650351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:29.212 [2024-11-27 04:39:25.650381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:29.212 [2024-11-27 04:39:25.650488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:29.212 [2024-11-27 04:39:25.650520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:29.212 [2024-11-27 04:39:25.650549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:29.212 [2024-11-27 04:39:25.650579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:29.212 [2024-11-27 04:39:25.650689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:29.212 [2024-11-27 04:39:25.650732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:29.212 [2024-11-27 04:39:25.650763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:29.212 [2024-11-27 04:39:25.650794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:29.213 [2024-11-27 04:39:25.650887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:29.213 [2024-11-27 04:39:25.650921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:29.213 [2024-11-27 04:39:25.650973] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:29.213 [2024-11-27 04:39:25.651005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.213 [2024-11-27 04:39:25.651059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:29.213 [2024-11-27 04:39:25.651091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:29.213 [2024-11-27 04:39:25.651145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:29.213 [2024-11-27 04:39:25.651204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:29.213 [2024-11-27 04:39:25.651236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.213 [2024-11-27 04:39:25.651279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:29.213 [2024-11-27 04:39:25.651302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.814 ms 00:22:29.213 [2024-11-27 04:39:25.651351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.213 [2024-11-27 04:39:25.651425] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:29.213 [2024-11-27 04:39:25.651491] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:31.128 [2024-11-27 04:39:27.692539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.128 [2024-11-27 04:39:27.692778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:31.128 [2024-11-27 04:39:27.692861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2041.104 ms 00:22:31.128 [2024-11-27 04:39:27.692891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.387 [2024-11-27 04:39:27.718918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.387 [2024-11-27 04:39:27.719096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:31.387 [2024-11-27 04:39:27.719114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.772 ms 00:22:31.387 [2024-11-27 04:39:27.719125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.387 [2024-11-27 04:39:27.719265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.387 [2024-11-27 04:39:27.719277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:31.387 [2024-11-27 04:39:27.719286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:31.387 [2024-11-27 04:39:27.719300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.387 [2024-11-27 04:39:27.749772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.387 [2024-11-27 04:39:27.749821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:31.387 [2024-11-27 04:39:27.749831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.437 ms 00:22:31.387 [2024-11-27 04:39:27.749841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.387 [2024-11-27 04:39:27.749883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.387 [2024-11-27 04:39:27.749893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:31.387 [2024-11-27 04:39:27.749901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:31.387 [2024-11-27 04:39:27.749916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.387 [2024-11-27 04:39:27.750261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.387 [2024-11-27 04:39:27.750280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:31.387 [2024-11-27 04:39:27.750289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:22:31.387 [2024-11-27 04:39:27.750298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.387 
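
One message worth calling out from the construct phase above: restore.sh line 54 printed "[: : integer expression expected" because the traced test was '[' '' -eq 1 ']', and -eq requires integers on both sides. In bash such a test simply exits with status 2, the surrounding conditional falls through, and the script continues, which is why FTL startup proceeds normally here. A hedged sketch of the usual repair, defaulting the unset variable before the numeric test (the variable name is hypothetical; the trace does not show which one expanded empty):

    flag=${flag:-0}              # hypothetical variable; treat unset/empty as 0
    if [ "$flag" -eq 1 ]; then   # now always a valid integer comparison
        echo "flag enabled"
    fi
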
[2024-11-27 04:39:27.750408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.387 [2024-11-27 04:39:27.750421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:31.387 [2024-11-27 04:39:27.750429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:22:31.387 [2024-11-27 04:39:27.750439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.387 [2024-11-27 04:39:27.764501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.387 [2024-11-27 04:39:27.764661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:31.387 [2024-11-27 04:39:27.764677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.042 ms 00:22:31.387 [2024-11-27 04:39:27.764686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.387 [2024-11-27 04:39:27.784097] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:31.387 [2024-11-27 04:39:27.787026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.387 [2024-11-27 04:39:27.787063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:31.387 [2024-11-27 04:39:27.787080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.234 ms 00:22:31.387 [2024-11-27 04:39:27.787089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.387 [2024-11-27 04:39:27.847757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.387 [2024-11-27 04:39:27.847814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:31.387 [2024-11-27 04:39:27.847830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.613 ms 00:22:31.387 [2024-11-27 04:39:27.847839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.387 [2024-11-27 04:39:27.848031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.387 [2024-11-27 04:39:27.848042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:31.387 [2024-11-27 04:39:27.848055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:22:31.387 [2024-11-27 04:39:27.848062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.387 [2024-11-27 04:39:27.871383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.387 [2024-11-27 04:39:27.871427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:31.387 [2024-11-27 04:39:27.871441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.259 ms 00:22:31.387 [2024-11-27 04:39:27.871449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.387 [2024-11-27 04:39:27.893402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.387 [2024-11-27 04:39:27.893557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:31.387 [2024-11-27 04:39:27.893579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.909 ms 00:22:31.387 [2024-11-27 04:39:27.893587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.387 [2024-11-27 04:39:27.894158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.387 [2024-11-27 04:39:27.894175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:31.387 
[2024-11-27 04:39:27.894188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:22:31.387 [2024-11-27 04:39:27.894195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.387 [2024-11-27 04:39:27.958008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.387 [2024-11-27 04:39:27.958067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:31.387 [2024-11-27 04:39:27.958086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.756 ms 00:22:31.387 [2024-11-27 04:39:27.958094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.646 [2024-11-27 04:39:27.981793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.646 [2024-11-27 04:39:27.981836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:31.646 [2024-11-27 04:39:27.981849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.625 ms 00:22:31.646 [2024-11-27 04:39:27.981857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.646 [2024-11-27 04:39:28.004667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.646 [2024-11-27 04:39:28.004815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:31.646 [2024-11-27 04:39:28.004835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.769 ms 00:22:31.646 [2024-11-27 04:39:28.004842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.646 [2024-11-27 04:39:28.028417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.646 [2024-11-27 04:39:28.028539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:31.646 [2024-11-27 04:39:28.028558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.534 ms 00:22:31.646 [2024-11-27 04:39:28.028565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.646 [2024-11-27 04:39:28.028601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.646 [2024-11-27 04:39:28.028610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:31.646 [2024-11-27 04:39:28.028623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:31.646 [2024-11-27 04:39:28.028630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.646 [2024-11-27 04:39:28.028705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.646 [2024-11-27 04:39:28.028717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:31.646 [2024-11-27 04:39:28.028745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:31.646 [2024-11-27 04:39:28.028753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.646 [2024-11-27 04:39:28.029623] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2394.320 ms, result 0 00:22:31.646 { 00:22:31.646 "name": "ftl0", 00:22:31.646 "uuid": "5a28c5f8-1fbc-4d83-88bc-ea3ff92fe155" 00:22:31.646 } 00:22:31.646 04:39:28 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:31.646 04:39:28 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:31.904 04:39:28 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:31.904 04:39:28 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:31.904 [2024-11-27 04:39:28.445219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.904 [2024-11-27 04:39:28.445269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:31.904 [2024-11-27 04:39:28.445281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:31.904 [2024-11-27 04:39:28.445290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.904 [2024-11-27 04:39:28.445313] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:31.904 [2024-11-27 04:39:28.447910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.904 [2024-11-27 04:39:28.447939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:31.904 [2024-11-27 04:39:28.447951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.578 ms 00:22:31.904 [2024-11-27 04:39:28.447960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.904 [2024-11-27 04:39:28.448220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.904 [2024-11-27 04:39:28.448238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:31.904 [2024-11-27 04:39:28.448248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:22:31.904 [2024-11-27 04:39:28.448256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.904 [2024-11-27 04:39:28.451484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.904 [2024-11-27 04:39:28.451604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:31.904 [2024-11-27 04:39:28.451622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.212 ms 00:22:31.904 [2024-11-27 04:39:28.451630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.904 [2024-11-27 04:39:28.457844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.904 [2024-11-27 04:39:28.457937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:31.904 [2024-11-27 04:39:28.457994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.189 ms 00:22:31.904 [2024-11-27 04:39:28.458017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.904 [2024-11-27 04:39:28.481475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.904 [2024-11-27 04:39:28.481598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:31.904 [2024-11-27 04:39:28.481618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.338 ms 00:22:31.904 [2024-11-27 04:39:28.481625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.163 [2024-11-27 04:39:28.495864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.163 [2024-11-27 04:39:28.495980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:32.163 [2024-11-27 04:39:28.496000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.198 ms 00:22:32.163 [2024-11-27 04:39:28.496008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.163 [2024-11-27 04:39:28.496166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.163 [2024-11-27 04:39:28.496177] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:32.163 [2024-11-27 04:39:28.496187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:22:32.163 [2024-11-27 04:39:28.496195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.164 [2024-11-27 04:39:28.519162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.164 [2024-11-27 04:39:28.519273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:32.164 [2024-11-27 04:39:28.519291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.946 ms 00:22:32.164 [2024-11-27 04:39:28.519298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.164 [2024-11-27 04:39:28.541615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.164 [2024-11-27 04:39:28.541647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:32.164 [2024-11-27 04:39:28.541659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.282 ms 00:22:32.164 [2024-11-27 04:39:28.541666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.164 [2024-11-27 04:39:28.563835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.164 [2024-11-27 04:39:28.563868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:32.164 [2024-11-27 04:39:28.563879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.132 ms 00:22:32.164 [2024-11-27 04:39:28.563887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.164 [2024-11-27 04:39:28.586147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.164 [2024-11-27 04:39:28.586181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:32.164 [2024-11-27 04:39:28.586193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.187 ms 00:22:32.164 [2024-11-27 04:39:28.586201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.164 [2024-11-27 04:39:28.586236] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:32.164 [2024-11-27 04:39:28.586250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586334] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 
[2024-11-27 04:39:28.586545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:32.164 [2024-11-27 04:39:28.586627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:22:32.165 [2024-11-27 04:39:28.586776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.586998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.587009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.587016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.587024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.587032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.587041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.587076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.587087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.587094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.587104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.587111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.587119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.587127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.587135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:32.165 [2024-11-27 04:39:28.587151] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:32.165 [2024-11-27 04:39:28.587160] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a28c5f8-1fbc-4d83-88bc-ea3ff92fe155 00:22:32.165 [2024-11-27 04:39:28.587167] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:32.165 [2024-11-27 04:39:28.587177] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:32.165 [2024-11-27 04:39:28.587187] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:32.165 [2024-11-27 04:39:28.587196] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:32.166 [2024-11-27 04:39:28.587203] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:32.166 [2024-11-27 04:39:28.587211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:32.166 [2024-11-27 04:39:28.587218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:32.166 [2024-11-27 04:39:28.587226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:32.166 [2024-11-27 04:39:28.587232] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:22:32.166 [2024-11-27 04:39:28.587240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.166 [2024-11-27 04:39:28.587248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:32.166 [2024-11-27 04:39:28.587257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:22:32.166 [2024-11-27 04:39:28.587266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.166 [2024-11-27 04:39:28.599627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.166 [2024-11-27 04:39:28.599659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:32.166 [2024-11-27 04:39:28.599672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.326 ms 00:22:32.166 [2024-11-27 04:39:28.599681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.166 [2024-11-27 04:39:28.600058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.166 [2024-11-27 04:39:28.600080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:32.166 [2024-11-27 04:39:28.600092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:22:32.166 [2024-11-27 04:39:28.600099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.166 [2024-11-27 04:39:28.641547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.166 [2024-11-27 04:39:28.641719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:32.166 [2024-11-27 04:39:28.641748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.166 [2024-11-27 04:39:28.641757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.166 [2024-11-27 04:39:28.641824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.166 [2024-11-27 04:39:28.641833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:32.166 [2024-11-27 04:39:28.641845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.166 [2024-11-27 04:39:28.641852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.166 [2024-11-27 04:39:28.641930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.166 [2024-11-27 04:39:28.641940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:32.166 [2024-11-27 04:39:28.641949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.166 [2024-11-27 04:39:28.641956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.166 [2024-11-27 04:39:28.641976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.166 [2024-11-27 04:39:28.641984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:32.166 [2024-11-27 04:39:28.641993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.166 [2024-11-27 04:39:28.642002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.166 [2024-11-27 04:39:28.719024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.166 [2024-11-27 04:39:28.719069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:32.166 [2024-11-27 04:39:28.719082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:32.166 [2024-11-27 04:39:28.719090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.424 [2024-11-27 04:39:28.782294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.424 [2024-11-27 04:39:28.782338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:32.424 [2024-11-27 04:39:28.782354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.424 [2024-11-27 04:39:28.782362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.425 [2024-11-27 04:39:28.782447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.425 [2024-11-27 04:39:28.782457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:32.425 [2024-11-27 04:39:28.782466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.425 [2024-11-27 04:39:28.782473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.425 [2024-11-27 04:39:28.782520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.425 [2024-11-27 04:39:28.782528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:32.425 [2024-11-27 04:39:28.782537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.425 [2024-11-27 04:39:28.782545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.425 [2024-11-27 04:39:28.782632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.425 [2024-11-27 04:39:28.782641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:32.425 [2024-11-27 04:39:28.782650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.425 [2024-11-27 04:39:28.782657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.425 [2024-11-27 04:39:28.782688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.425 [2024-11-27 04:39:28.782696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:32.425 [2024-11-27 04:39:28.782705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.425 [2024-11-27 04:39:28.782712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.425 [2024-11-27 04:39:28.782773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.425 [2024-11-27 04:39:28.782783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:32.425 [2024-11-27 04:39:28.782792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.425 [2024-11-27 04:39:28.782800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.425 [2024-11-27 04:39:28.782843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.425 [2024-11-27 04:39:28.782852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:32.425 [2024-11-27 04:39:28.782862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.425 [2024-11-27 04:39:28.782869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.425 [2024-11-27 04:39:28.782994] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 337.743 ms, result 0 00:22:32.425 true 00:22:32.425 04:39:28 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76968 
00:22:32.425 04:39:28 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 76968 ']' 00:22:32.425 04:39:28 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 76968 00:22:32.425 04:39:28 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:22:32.425 04:39:28 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.425 04:39:28 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76968 00:22:32.425 04:39:28 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.425 04:39:28 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.425 04:39:28 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76968' 00:22:32.425 killing process with pid 76968 00:22:32.425 04:39:28 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 76968 00:22:32.425 04:39:28 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 76968 00:22:40.617 04:39:37 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:22:44.836 262144+0 records in 00:22:44.836 262144+0 records out 00:22:44.836 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.27674 s, 251 MB/s 00:22:44.836 04:39:41 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:46.747 04:39:43 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:46.747 [2024-11-27 04:39:43.247759] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:22:46.747 [2024-11-27 04:39:43.248035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77183 ] 00:22:47.007 [2024-11-27 04:39:43.402055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.007 [2024-11-27 04:39:43.502902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.267 [2024-11-27 04:39:43.757944] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:47.267 [2024-11-27 04:39:43.758013] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:47.529 [2024-11-27 04:39:43.911272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.529 [2024-11-27 04:39:43.911330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:47.529 [2024-11-27 04:39:43.911343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:47.529 [2024-11-27 04:39:43.911351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.529 [2024-11-27 04:39:43.911396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.529 [2024-11-27 04:39:43.911409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:47.529 [2024-11-27 04:39:43.911417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:47.529 [2024-11-27 04:39:43.911424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.529 [2024-11-27 04:39:43.911443] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
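
As a quick sanity check on the dd step above: bs=4K times count=256K is 262144 records of 4096 bytes, i.e. 1073741824 bytes (1 GiB), exactly what dd reports, and 1073741824 B / 4.27674 s comes out at the printed 251 MB/s (decimal megabytes). The same check in shell, with the values copied from the trace:

    bytes=$(( 262144 * 4096 ))   # 1073741824, matching dd's summary line
    awk -v b="$bytes" -v s=4.27674 'BEGIN { printf "%.0f MB/s\n", b / s / 1e6 }'   # 251 MB/s

The md5sum taken at restore.sh line 70 records the checksum of this random data before spdk_dd writes it through ftl0, so that the data read back after restore can be compared against it later in the test.
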
Using nvc0n1p0 as write buffer cache 00:22:47.529 [2024-11-27 04:39:43.912175] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:47.529 [2024-11-27 04:39:43.912200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.529 [2024-11-27 04:39:43.912208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:47.529 [2024-11-27 04:39:43.912216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:22:47.529 [2024-11-27 04:39:43.912223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.529 [2024-11-27 04:39:43.913358] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:47.529 [2024-11-27 04:39:43.925369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.529 [2024-11-27 04:39:43.925406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:47.529 [2024-11-27 04:39:43.925418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.013 ms 00:22:47.529 [2024-11-27 04:39:43.925427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.529 [2024-11-27 04:39:43.925492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.529 [2024-11-27 04:39:43.925501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:47.529 [2024-11-27 04:39:43.925509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:47.529 [2024-11-27 04:39:43.925517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.529 [2024-11-27 04:39:43.930342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.529 [2024-11-27 04:39:43.930381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:47.529 [2024-11-27 04:39:43.930391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.768 ms 00:22:47.529 [2024-11-27 04:39:43.930407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.529 [2024-11-27 04:39:43.930496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.529 [2024-11-27 04:39:43.930505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:47.529 [2024-11-27 04:39:43.930513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:47.529 [2024-11-27 04:39:43.930520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.529 [2024-11-27 04:39:43.930559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.529 [2024-11-27 04:39:43.930568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:47.529 [2024-11-27 04:39:43.930576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:47.529 [2024-11-27 04:39:43.930584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.529 [2024-11-27 04:39:43.930607] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:47.529 [2024-11-27 04:39:43.934033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.529 [2024-11-27 04:39:43.934062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:47.529 [2024-11-27 04:39:43.934074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.431 ms 00:22:47.529 [2024-11-27 04:39:43.934081] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.529 [2024-11-27 04:39:43.934107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.529 [2024-11-27 04:39:43.934115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:47.530 [2024-11-27 04:39:43.934123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:47.530 [2024-11-27 04:39:43.934130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.530 [2024-11-27 04:39:43.934149] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:47.530 [2024-11-27 04:39:43.934166] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:47.530 [2024-11-27 04:39:43.934199] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:47.530 [2024-11-27 04:39:43.934215] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:47.530 [2024-11-27 04:39:43.934317] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:47.530 [2024-11-27 04:39:43.934327] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:47.530 [2024-11-27 04:39:43.934336] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:47.530 [2024-11-27 04:39:43.934346] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:47.530 [2024-11-27 04:39:43.934354] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:47.530 [2024-11-27 04:39:43.934362] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:47.530 [2024-11-27 04:39:43.934370] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:47.530 [2024-11-27 04:39:43.934379] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:47.530 [2024-11-27 04:39:43.934387] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:47.530 [2024-11-27 04:39:43.934394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.530 [2024-11-27 04:39:43.934401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:47.530 [2024-11-27 04:39:43.934409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:22:47.530 [2024-11-27 04:39:43.934416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.530 [2024-11-27 04:39:43.934497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.530 [2024-11-27 04:39:43.934505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:47.530 [2024-11-27 04:39:43.934512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:47.530 [2024-11-27 04:39:43.934519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.530 [2024-11-27 04:39:43.934633] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:47.530 [2024-11-27 04:39:43.934643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:47.530 [2024-11-27 04:39:43.934652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:22:47.530 [2024-11-27 04:39:43.934659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.530 [2024-11-27 04:39:43.934666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:47.530 [2024-11-27 04:39:43.934673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:47.530 [2024-11-27 04:39:43.934679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:47.530 [2024-11-27 04:39:43.934687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:47.530 [2024-11-27 04:39:43.934694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:47.530 [2024-11-27 04:39:43.934700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:47.530 [2024-11-27 04:39:43.934706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:47.530 [2024-11-27 04:39:43.934713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:47.530 [2024-11-27 04:39:43.934720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:47.530 [2024-11-27 04:39:43.934762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:47.530 [2024-11-27 04:39:43.934769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:47.530 [2024-11-27 04:39:43.934776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.530 [2024-11-27 04:39:43.934782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:47.530 [2024-11-27 04:39:43.934789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:47.530 [2024-11-27 04:39:43.934795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.530 [2024-11-27 04:39:43.934802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:47.530 [2024-11-27 04:39:43.934809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:47.530 [2024-11-27 04:39:43.934815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.530 [2024-11-27 04:39:43.934827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:47.530 [2024-11-27 04:39:43.934834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:47.530 [2024-11-27 04:39:43.934840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.530 [2024-11-27 04:39:43.934846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:47.530 [2024-11-27 04:39:43.934853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:47.530 [2024-11-27 04:39:43.934859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.530 [2024-11-27 04:39:43.934865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:47.530 [2024-11-27 04:39:43.934872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:47.530 [2024-11-27 04:39:43.934878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.530 [2024-11-27 04:39:43.934885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:47.530 [2024-11-27 04:39:43.934891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:47.530 [2024-11-27 04:39:43.934898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:47.530 [2024-11-27 04:39:43.934904] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:22:47.530 [2024-11-27 04:39:43.934910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:47.530 [2024-11-27 04:39:43.934916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:47.530 [2024-11-27 04:39:43.934923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:47.530 [2024-11-27 04:39:43.934930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:47.530 [2024-11-27 04:39:43.934935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.530 [2024-11-27 04:39:43.934942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:47.530 [2024-11-27 04:39:43.934947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:47.530 [2024-11-27 04:39:43.934953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.530 [2024-11-27 04:39:43.934959] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:47.530 [2024-11-27 04:39:43.934966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:47.530 [2024-11-27 04:39:43.934975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:47.530 [2024-11-27 04:39:43.934982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.530 [2024-11-27 04:39:43.934989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:47.530 [2024-11-27 04:39:43.934995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:47.530 [2024-11-27 04:39:43.935001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:47.530 [2024-11-27 04:39:43.935008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:47.530 [2024-11-27 04:39:43.935014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:47.530 [2024-11-27 04:39:43.935021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:47.530 [2024-11-27 04:39:43.935029] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:47.530 [2024-11-27 04:39:43.935038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:47.530 [2024-11-27 04:39:43.935048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:47.530 [2024-11-27 04:39:43.935055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:47.530 [2024-11-27 04:39:43.935062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:47.530 [2024-11-27 04:39:43.935069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:47.530 [2024-11-27 04:39:43.935076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:47.530 [2024-11-27 04:39:43.935083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:47.530 [2024-11-27 04:39:43.935089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:47.530 [2024-11-27 04:39:43.935096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:47.530 [2024-11-27 04:39:43.935103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:47.530 [2024-11-27 04:39:43.935110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:47.530 [2024-11-27 04:39:43.935116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:47.530 [2024-11-27 04:39:43.935123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:47.530 [2024-11-27 04:39:43.935130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:47.530 [2024-11-27 04:39:43.935137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:47.530 [2024-11-27 04:39:43.935144] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:47.531 [2024-11-27 04:39:43.935152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:47.531 [2024-11-27 04:39:43.935159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:47.531 [2024-11-27 04:39:43.935166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:47.531 [2024-11-27 04:39:43.935173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:47.531 [2024-11-27 04:39:43.935179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:47.531 [2024-11-27 04:39:43.935186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.531 [2024-11-27 04:39:43.935193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:47.531 [2024-11-27 04:39:43.935201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:22:47.531 [2024-11-27 04:39:43.935208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.531 [2024-11-27 04:39:43.960555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.531 [2024-11-27 04:39:43.960777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:47.531 [2024-11-27 04:39:43.960794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.305 ms 00:22:47.531 [2024-11-27 04:39:43.960807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.531 [2024-11-27 04:39:43.960901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.531 [2024-11-27 04:39:43.960910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:47.531 [2024-11-27 04:39:43.960918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.068 ms 00:22:47.531 [2024-11-27 04:39:43.960925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.531 [2024-11-27 04:39:43.998498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.531 [2024-11-27 04:39:43.998658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:47.531 [2024-11-27 04:39:43.998677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.515 ms 00:22:47.531 [2024-11-27 04:39:43.998685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.531 [2024-11-27 04:39:43.998748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.531 [2024-11-27 04:39:43.998758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:47.531 [2024-11-27 04:39:43.998771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:47.531 [2024-11-27 04:39:43.998779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.531 [2024-11-27 04:39:43.999131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.531 [2024-11-27 04:39:43.999147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:47.531 [2024-11-27 04:39:43.999156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:22:47.531 [2024-11-27 04:39:43.999164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.531 [2024-11-27 04:39:43.999281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.531 [2024-11-27 04:39:43.999290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:47.531 [2024-11-27 04:39:43.999298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:22:47.531 [2024-11-27 04:39:43.999309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.531 [2024-11-27 04:39:44.012126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.531 [2024-11-27 04:39:44.012156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:47.531 [2024-11-27 04:39:44.012168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.798 ms 00:22:47.531 [2024-11-27 04:39:44.012175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.531 [2024-11-27 04:39:44.024188] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:47.531 [2024-11-27 04:39:44.024222] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:47.531 [2024-11-27 04:39:44.024234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.531 [2024-11-27 04:39:44.024242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:47.531 [2024-11-27 04:39:44.024251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.950 ms 00:22:47.531 [2024-11-27 04:39:44.024257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.531 [2024-11-27 04:39:44.048091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.531 [2024-11-27 04:39:44.048251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:47.531 [2024-11-27 04:39:44.048267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.795 ms 00:22:47.531 [2024-11-27 04:39:44.048275] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.531 [2024-11-27 04:39:44.059627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.531 [2024-11-27 04:39:44.059755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:47.531 [2024-11-27 04:39:44.059770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.309 ms 00:22:47.531 [2024-11-27 04:39:44.059778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.531 [2024-11-27 04:39:44.070874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.531 [2024-11-27 04:39:44.070986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:47.531 [2024-11-27 04:39:44.071001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.068 ms 00:22:47.531 [2024-11-27 04:39:44.071008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.531 [2024-11-27 04:39:44.071609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.531 [2024-11-27 04:39:44.071628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:47.531 [2024-11-27 04:39:44.071638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:22:47.531 [2024-11-27 04:39:44.071647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.791 [2024-11-27 04:39:44.125316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.791 [2024-11-27 04:39:44.125367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:47.791 [2024-11-27 04:39:44.125380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.651 ms 00:22:47.791 [2024-11-27 04:39:44.125392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.791 [2024-11-27 04:39:44.135680] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:47.791 [2024-11-27 04:39:44.138206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.791 [2024-11-27 04:39:44.138237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:47.791 [2024-11-27 04:39:44.138249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.767 ms 00:22:47.791 [2024-11-27 04:39:44.138257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.791 [2024-11-27 04:39:44.138347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.791 [2024-11-27 04:39:44.138358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:47.791 [2024-11-27 04:39:44.138366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:47.791 [2024-11-27 04:39:44.138374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.791 [2024-11-27 04:39:44.138438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.791 [2024-11-27 04:39:44.138448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:47.791 [2024-11-27 04:39:44.138456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:47.791 [2024-11-27 04:39:44.138463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.791 [2024-11-27 04:39:44.138482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.791 [2024-11-27 04:39:44.138490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:22:47.791 [2024-11-27 04:39:44.138497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:47.791 [2024-11-27 04:39:44.138504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.791 [2024-11-27 04:39:44.138534] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:47.791 [2024-11-27 04:39:44.138546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.791 [2024-11-27 04:39:44.138553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:47.791 [2024-11-27 04:39:44.138561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:47.791 [2024-11-27 04:39:44.138568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.791 [2024-11-27 04:39:44.161423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.791 [2024-11-27 04:39:44.161457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:47.791 [2024-11-27 04:39:44.161468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.838 ms 00:22:47.791 [2024-11-27 04:39:44.161480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.791 [2024-11-27 04:39:44.161547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.791 [2024-11-27 04:39:44.161556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:47.791 [2024-11-27 04:39:44.161564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:47.791 [2024-11-27 04:39:44.161571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.791 [2024-11-27 04:39:44.162853] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 251.156 ms, result 0 00:22:48.735  [2024-11-27T04:39:46.264Z] Copying: 46/1024 [MB] (46 MBps) [2024-11-27T04:39:47.205Z] Copying: 92/1024 [MB] (46 MBps) [2024-11-27T04:39:48.580Z] Copying: 143/1024 [MB] (50 MBps) [2024-11-27T04:39:49.516Z] Copying: 188/1024 [MB] (45 MBps) [2024-11-27T04:39:50.458Z] Copying: 233/1024 [MB] (44 MBps) [2024-11-27T04:39:51.402Z] Copying: 272/1024 [MB] (39 MBps) [2024-11-27T04:39:52.345Z] Copying: 317/1024 [MB] (44 MBps) [2024-11-27T04:39:53.288Z] Copying: 359/1024 [MB] (41 MBps) [2024-11-27T04:39:54.232Z] Copying: 406/1024 [MB] (47 MBps) [2024-11-27T04:39:55.618Z] Copying: 449/1024 [MB] (42 MBps) [2024-11-27T04:39:56.189Z] Copying: 496/1024 [MB] (46 MBps) [2024-11-27T04:39:57.187Z] Copying: 526/1024 [MB] (30 MBps) [2024-11-27T04:39:58.571Z] Copying: 568/1024 [MB] (41 MBps) [2024-11-27T04:39:59.513Z] Copying: 614/1024 [MB] (46 MBps) [2024-11-27T04:40:00.456Z] Copying: 660/1024 [MB] (45 MBps) [2024-11-27T04:40:01.396Z] Copying: 705/1024 [MB] (45 MBps) [2024-11-27T04:40:02.335Z] Copying: 751/1024 [MB] (45 MBps) [2024-11-27T04:40:03.275Z] Copying: 786/1024 [MB] (34 MBps) [2024-11-27T04:40:04.227Z] Copying: 831/1024 [MB] (45 MBps) [2024-11-27T04:40:05.610Z] Copying: 874/1024 [MB] (42 MBps) [2024-11-27T04:40:06.182Z] Copying: 921/1024 [MB] (46 MBps) [2024-11-27T04:40:07.571Z] Copying: 965/1024 [MB] (44 MBps) [2024-11-27T04:40:07.571Z] Copying: 1010/1024 [MB] (44 MBps) [2024-11-27T04:40:07.571Z] Copying: 1024/1024 [MB] (average 43 MBps)[2024-11-27 04:40:07.475442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.984 [2024-11-27 04:40:07.475562] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:10.984 [2024-11-27 04:40:07.475623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:10.984 [2024-11-27 04:40:07.475647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.984 [2024-11-27 04:40:07.475681] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:10.984 [2024-11-27 04:40:07.478367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.984 [2024-11-27 04:40:07.478479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:10.984 [2024-11-27 04:40:07.478572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:23:10.984 [2024-11-27 04:40:07.478627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.984 [2024-11-27 04:40:07.480053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.984 [2024-11-27 04:40:07.480151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:10.984 [2024-11-27 04:40:07.480207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:23:10.984 [2024-11-27 04:40:07.480229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.984 [2024-11-27 04:40:07.494275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.984 [2024-11-27 04:40:07.494396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:10.984 [2024-11-27 04:40:07.494453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.988 ms 00:23:10.984 [2024-11-27 04:40:07.494475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.984 [2024-11-27 04:40:07.500618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.984 [2024-11-27 04:40:07.500721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:10.984 [2024-11-27 04:40:07.500780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.100 ms 00:23:10.984 [2024-11-27 04:40:07.500801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.984 [2024-11-27 04:40:07.524279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.984 [2024-11-27 04:40:07.524393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:10.984 [2024-11-27 04:40:07.524470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.385 ms 00:23:10.984 [2024-11-27 04:40:07.524492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.984 [2024-11-27 04:40:07.538774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.984 [2024-11-27 04:40:07.538898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:10.984 [2024-11-27 04:40:07.538959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.241 ms 00:23:10.984 [2024-11-27 04:40:07.538989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.984 [2024-11-27 04:40:07.539127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.984 [2024-11-27 04:40:07.539158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:10.984 [2024-11-27 04:40:07.539178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:23:10.984 [2024-11-27 04:40:07.539228] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:10.984 [2024-11-27 04:40:07.564550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.984 [2024-11-27 04:40:07.564673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:10.984 [2024-11-27 04:40:07.564735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.293 ms 00:23:10.984 [2024-11-27 04:40:07.564758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.253 [2024-11-27 04:40:07.588579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.253 [2024-11-27 04:40:07.588689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:11.253 [2024-11-27 04:40:07.588791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.760 ms 00:23:11.253 [2024-11-27 04:40:07.588815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.253 [2024-11-27 04:40:07.612765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.254 [2024-11-27 04:40:07.612908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:11.254 [2024-11-27 04:40:07.612979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.910 ms 00:23:11.254 [2024-11-27 04:40:07.613013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.254 [2024-11-27 04:40:07.643207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.254 [2024-11-27 04:40:07.643379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:11.254 [2024-11-27 04:40:07.643511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.101 ms 00:23:11.254 [2024-11-27 04:40:07.643549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.254 [2024-11-27 04:40:07.643775] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:11.254 [2024-11-27 04:40:07.643828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 
04:40:07.644522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 
00:23:11.254 [2024-11-27 04:40:07.644849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.644995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 
wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:11.254 [2024-11-27 04:40:07.645336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:11.255 [2024-11-27 04:40:07.645649] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:11.255 [2024-11-27 04:40:07.645666] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a28c5f8-1fbc-4d83-88bc-ea3ff92fe155 00:23:11.255 [2024-11-27 04:40:07.645678] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:11.255 [2024-11-27 04:40:07.645690] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:11.255 [2024-11-27 04:40:07.645700] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:11.255 [2024-11-27 04:40:07.645712] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:11.255 [2024-11-27 04:40:07.645736] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:11.255 [2024-11-27 04:40:07.645756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:11.255 [2024-11-27 04:40:07.645768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:11.255 [2024-11-27 04:40:07.645779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:11.255 [2024-11-27 04:40:07.645789] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:11.255 [2024-11-27 04:40:07.645801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.255 [2024-11-27 04:40:07.645813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:11.255 
[2024-11-27 04:40:07.645827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.028 ms 00:23:11.255 [2024-11-27 04:40:07.645838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.255 [2024-11-27 04:40:07.666506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.255 [2024-11-27 04:40:07.666639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:11.255 [2024-11-27 04:40:07.666704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.620 ms 00:23:11.255 [2024-11-27 04:40:07.666764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.255 [2024-11-27 04:40:07.667300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.255 [2024-11-27 04:40:07.667407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:11.255 [2024-11-27 04:40:07.667482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:23:11.255 [2024-11-27 04:40:07.667569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.255 [2024-11-27 04:40:07.700282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.255 [2024-11-27 04:40:07.700453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:11.255 [2024-11-27 04:40:07.700527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.255 [2024-11-27 04:40:07.700551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.255 [2024-11-27 04:40:07.700625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.255 [2024-11-27 04:40:07.700646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:11.255 [2024-11-27 04:40:07.700690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.255 [2024-11-27 04:40:07.700717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.255 [2024-11-27 04:40:07.700800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.255 [2024-11-27 04:40:07.700825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:11.255 [2024-11-27 04:40:07.700844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.255 [2024-11-27 04:40:07.700987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.255 [2024-11-27 04:40:07.701015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.255 [2024-11-27 04:40:07.701034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:11.255 [2024-11-27 04:40:07.701053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.255 [2024-11-27 04:40:07.701165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.255 [2024-11-27 04:40:07.795944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.255 [2024-11-27 04:40:07.796148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:11.255 [2024-11-27 04:40:07.796204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.255 [2024-11-27 04:40:07.796227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.558 [2024-11-27 04:40:07.877350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.558 [2024-11-27 04:40:07.877508] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:11.558 [2024-11-27 04:40:07.877606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.558 [2024-11-27 04:40:07.877637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.558 [2024-11-27 04:40:07.877740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.558 [2024-11-27 04:40:07.877809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:11.558 [2024-11-27 04:40:07.877832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.558 [2024-11-27 04:40:07.877850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.558 [2024-11-27 04:40:07.877924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.558 [2024-11-27 04:40:07.877949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:11.558 [2024-11-27 04:40:07.877968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.558 [2024-11-27 04:40:07.877987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.558 [2024-11-27 04:40:07.878132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.558 [2024-11-27 04:40:07.878160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:11.558 [2024-11-27 04:40:07.878180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.558 [2024-11-27 04:40:07.878235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.558 [2024-11-27 04:40:07.878283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.558 [2024-11-27 04:40:07.878305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:11.558 [2024-11-27 04:40:07.878325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.558 [2024-11-27 04:40:07.878377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.558 [2024-11-27 04:40:07.878425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.558 [2024-11-27 04:40:07.878451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:11.558 [2024-11-27 04:40:07.878471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.558 [2024-11-27 04:40:07.878488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.558 [2024-11-27 04:40:07.878573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.558 [2024-11-27 04:40:07.878598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:11.558 [2024-11-27 04:40:07.878618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.558 [2024-11-27 04:40:07.878635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.558 [2024-11-27 04:40:07.878769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 403.286 ms, result 0 00:23:12.998 00:23:12.998 00:23:12.998 04:40:09 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:12.998 [2024-11-27 04:40:09.429847] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 
initialization... 00:23:12.998 [2024-11-27 04:40:09.429972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77456 ] 00:23:13.259 [2024-11-27 04:40:09.591150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.259 [2024-11-27 04:40:09.690252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.519 [2024-11-27 04:40:09.948493] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:13.519 [2024-11-27 04:40:09.948561] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:13.519 [2024-11-27 04:40:10.102520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.519 [2024-11-27 04:40:10.102578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:13.519 [2024-11-27 04:40:10.102591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:13.519 [2024-11-27 04:40:10.102600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.519 [2024-11-27 04:40:10.102646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.519 [2024-11-27 04:40:10.102658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:13.519 [2024-11-27 04:40:10.102667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:13.519 [2024-11-27 04:40:10.102674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.519 [2024-11-27 04:40:10.102694] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:13.782 [2024-11-27 04:40:10.103403] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:13.782 [2024-11-27 04:40:10.103426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.782 [2024-11-27 04:40:10.103434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:13.782 [2024-11-27 04:40:10.103443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:23:13.782 [2024-11-27 04:40:10.103450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.782 [2024-11-27 04:40:10.104503] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:13.782 [2024-11-27 04:40:10.116755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.782 [2024-11-27 04:40:10.116790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:13.782 [2024-11-27 04:40:10.116802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.254 ms 00:23:13.782 [2024-11-27 04:40:10.116810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.782 [2024-11-27 04:40:10.116863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.782 [2024-11-27 04:40:10.116873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:13.782 [2024-11-27 04:40:10.116881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:13.782 [2024-11-27 04:40:10.116889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.782 [2024-11-27 04:40:10.121890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:13.782 [2024-11-27 04:40:10.122033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:13.782 [2024-11-27 04:40:10.122048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.937 ms 00:23:13.782 [2024-11-27 04:40:10.122060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.782 [2024-11-27 04:40:10.122132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.783 [2024-11-27 04:40:10.122141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:13.783 [2024-11-27 04:40:10.122150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:13.783 [2024-11-27 04:40:10.122157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.783 [2024-11-27 04:40:10.122195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.783 [2024-11-27 04:40:10.122205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:13.783 [2024-11-27 04:40:10.122213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:13.783 [2024-11-27 04:40:10.122220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.783 [2024-11-27 04:40:10.122243] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:13.783 [2024-11-27 04:40:10.125440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.783 [2024-11-27 04:40:10.125558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:13.783 [2024-11-27 04:40:10.125577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.202 ms 00:23:13.783 [2024-11-27 04:40:10.125585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.783 [2024-11-27 04:40:10.125614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.783 [2024-11-27 04:40:10.125622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:13.783 [2024-11-27 04:40:10.125630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:13.783 [2024-11-27 04:40:10.125637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.783 [2024-11-27 04:40:10.125657] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:13.783 [2024-11-27 04:40:10.125675] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:13.783 [2024-11-27 04:40:10.125709] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:13.783 [2024-11-27 04:40:10.125741] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:13.783 [2024-11-27 04:40:10.125844] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:13.783 [2024-11-27 04:40:10.125854] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:13.783 [2024-11-27 04:40:10.125864] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:13.783 [2024-11-27 04:40:10.125873] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:13.783 [2024-11-27 04:40:10.125882] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:13.783 [2024-11-27 04:40:10.125890] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:13.783 [2024-11-27 04:40:10.125897] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:13.783 [2024-11-27 04:40:10.125906] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:13.783 [2024-11-27 04:40:10.125913] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:13.783 [2024-11-27 04:40:10.125920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.783 [2024-11-27 04:40:10.125928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:13.783 [2024-11-27 04:40:10.125935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:23:13.783 [2024-11-27 04:40:10.125942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.783 [2024-11-27 04:40:10.126023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.783 [2024-11-27 04:40:10.126030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:13.783 [2024-11-27 04:40:10.126038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:13.783 [2024-11-27 04:40:10.126044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.783 [2024-11-27 04:40:10.126146] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:13.783 [2024-11-27 04:40:10.126156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:13.783 [2024-11-27 04:40:10.126164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:13.783 [2024-11-27 04:40:10.126171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.783 [2024-11-27 04:40:10.126179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:13.783 [2024-11-27 04:40:10.126186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:13.783 [2024-11-27 04:40:10.126192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:13.783 [2024-11-27 04:40:10.126198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:13.783 [2024-11-27 04:40:10.126206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:13.783 [2024-11-27 04:40:10.126213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:13.783 [2024-11-27 04:40:10.126219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:13.783 [2024-11-27 04:40:10.126226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:13.783 [2024-11-27 04:40:10.126232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:13.783 [2024-11-27 04:40:10.126244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:13.783 [2024-11-27 04:40:10.126250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:13.783 [2024-11-27 04:40:10.126257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.783 [2024-11-27 04:40:10.126263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:13.783 [2024-11-27 04:40:10.126270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:13.783 [2024-11-27 04:40:10.126276] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.783 [2024-11-27 04:40:10.126283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:13.783 [2024-11-27 04:40:10.126290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:13.783 [2024-11-27 04:40:10.126296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:13.783 [2024-11-27 04:40:10.126303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:13.783 [2024-11-27 04:40:10.126309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:13.783 [2024-11-27 04:40:10.126315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:13.783 [2024-11-27 04:40:10.126321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:13.783 [2024-11-27 04:40:10.126328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:13.783 [2024-11-27 04:40:10.126334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:13.783 [2024-11-27 04:40:10.126340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:13.783 [2024-11-27 04:40:10.126347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:13.783 [2024-11-27 04:40:10.126353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:13.783 [2024-11-27 04:40:10.126360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:13.783 [2024-11-27 04:40:10.126366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:13.783 [2024-11-27 04:40:10.126372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:13.783 [2024-11-27 04:40:10.126379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:13.783 [2024-11-27 04:40:10.126385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:13.784 [2024-11-27 04:40:10.126391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:13.784 [2024-11-27 04:40:10.126398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:13.784 [2024-11-27 04:40:10.126404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:13.784 [2024-11-27 04:40:10.126410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.784 [2024-11-27 04:40:10.126416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:13.784 [2024-11-27 04:40:10.126422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:13.784 [2024-11-27 04:40:10.126428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.784 [2024-11-27 04:40:10.126435] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:13.784 [2024-11-27 04:40:10.126442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:13.784 [2024-11-27 04:40:10.126450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:13.784 [2024-11-27 04:40:10.126457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.784 [2024-11-27 04:40:10.126464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:13.784 [2024-11-27 04:40:10.126471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:13.784 [2024-11-27 04:40:10.126477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:13.784 
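
The l2p region size in the layout dump above follows directly from two parameters printed just before it (L2P entries: 20971520, L2P address size: 4). A quick sketch of the check, assuming each entry occupies exactly the 4-byte address size shown:

  $ echo $(( 20971520 * 4 / 1024 / 1024 ))   # entries * bytes-per-entry, in MiB
  80

which matches the "Region l2p ... blocks: 80.00 MiB" record in the dump.
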
[2024-11-27 04:40:10.126484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:13.784 [2024-11-27 04:40:10.126491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:13.784 [2024-11-27 04:40:10.126497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:13.784 [2024-11-27 04:40:10.126505] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:13.784 [2024-11-27 04:40:10.126513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:13.784 [2024-11-27 04:40:10.126523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:13.784 [2024-11-27 04:40:10.126530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:13.784 [2024-11-27 04:40:10.126537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:13.784 [2024-11-27 04:40:10.126544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:13.784 [2024-11-27 04:40:10.126551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:13.784 [2024-11-27 04:40:10.126557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:13.784 [2024-11-27 04:40:10.126564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:13.784 [2024-11-27 04:40:10.126571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:13.784 [2024-11-27 04:40:10.126578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:13.784 [2024-11-27 04:40:10.126585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:13.784 [2024-11-27 04:40:10.126591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:13.784 [2024-11-27 04:40:10.126598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:13.784 [2024-11-27 04:40:10.126605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:13.784 [2024-11-27 04:40:10.126613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:13.784 [2024-11-27 04:40:10.126619] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:13.784 [2024-11-27 04:40:10.126627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:13.784 [2024-11-27 04:40:10.126635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:13.784 [2024-11-27 04:40:10.126642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:13.784 [2024-11-27 04:40:10.126649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:13.784 [2024-11-27 04:40:10.126656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:13.784 [2024-11-27 04:40:10.126663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.784 [2024-11-27 04:40:10.126670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:13.784 [2024-11-27 04:40:10.126677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:23:13.784 [2024-11-27 04:40:10.126684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.784 [2024-11-27 04:40:10.152830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.784 [2024-11-27 04:40:10.152865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:13.784 [2024-11-27 04:40:10.152875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.082 ms 00:23:13.784 [2024-11-27 04:40:10.152886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.784 [2024-11-27 04:40:10.152973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.784 [2024-11-27 04:40:10.152982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:13.784 [2024-11-27 04:40:10.152990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:13.784 [2024-11-27 04:40:10.152998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.784 [2024-11-27 04:40:10.194754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.784 [2024-11-27 04:40:10.194803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:13.784 [2024-11-27 04:40:10.194815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.703 ms 00:23:13.784 [2024-11-27 04:40:10.194823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.784 [2024-11-27 04:40:10.194865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.784 [2024-11-27 04:40:10.194874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:13.784 [2024-11-27 04:40:10.194886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:13.784 [2024-11-27 04:40:10.194893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.784 [2024-11-27 04:40:10.195237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.784 [2024-11-27 04:40:10.195253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:13.784 [2024-11-27 04:40:10.195262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:23:13.784 [2024-11-27 04:40:10.195270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.784 [2024-11-27 04:40:10.195389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.784 [2024-11-27 04:40:10.195397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:13.784 [2024-11-27 04:40:10.195408] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:23:13.784 [2024-11-27 04:40:10.195416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.784 [2024-11-27 04:40:10.208269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.784 [2024-11-27 04:40:10.208301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:13.784 [2024-11-27 04:40:10.208311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.833 ms 00:23:13.785 [2024-11-27 04:40:10.208319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.785 [2024-11-27 04:40:10.220419] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:13.785 [2024-11-27 04:40:10.220453] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:13.785 [2024-11-27 04:40:10.220465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.785 [2024-11-27 04:40:10.220473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:13.785 [2024-11-27 04:40:10.220483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.060 ms 00:23:13.785 [2024-11-27 04:40:10.220490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.785 [2024-11-27 04:40:10.244682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.785 [2024-11-27 04:40:10.244853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:13.785 [2024-11-27 04:40:10.244870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.154 ms 00:23:13.785 [2024-11-27 04:40:10.244878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.785 [2024-11-27 04:40:10.256112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.785 [2024-11-27 04:40:10.256234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:13.785 [2024-11-27 04:40:10.256249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.168 ms 00:23:13.785 [2024-11-27 04:40:10.256256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.785 [2024-11-27 04:40:10.267384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.785 [2024-11-27 04:40:10.267492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:13.785 [2024-11-27 04:40:10.267508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.099 ms 00:23:13.785 [2024-11-27 04:40:10.267516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.785 [2024-11-27 04:40:10.268153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.785 [2024-11-27 04:40:10.268168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:13.785 [2024-11-27 04:40:10.268179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:23:13.785 [2024-11-27 04:40:10.268187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.785 [2024-11-27 04:40:10.321737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.785 [2024-11-27 04:40:10.321945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:13.785 [2024-11-27 04:40:10.321970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.519 ms 00:23:13.785 [2024-11-27 04:40:10.321978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.785 [2024-11-27 04:40:10.332495] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:13.785 [2024-11-27 04:40:10.335054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.785 [2024-11-27 04:40:10.335084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:13.785 [2024-11-27 04:40:10.335096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.037 ms 00:23:13.785 [2024-11-27 04:40:10.335104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.785 [2024-11-27 04:40:10.335196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.785 [2024-11-27 04:40:10.335206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:13.785 [2024-11-27 04:40:10.335217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:13.785 [2024-11-27 04:40:10.335225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.785 [2024-11-27 04:40:10.335288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.785 [2024-11-27 04:40:10.335298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:13.785 [2024-11-27 04:40:10.335305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:13.785 [2024-11-27 04:40:10.335313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.785 [2024-11-27 04:40:10.335331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.785 [2024-11-27 04:40:10.335339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:13.785 [2024-11-27 04:40:10.335347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:13.785 [2024-11-27 04:40:10.335355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.785 [2024-11-27 04:40:10.335386] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:13.785 [2024-11-27 04:40:10.335396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.785 [2024-11-27 04:40:10.335403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:13.785 [2024-11-27 04:40:10.335411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:13.785 [2024-11-27 04:40:10.335418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.785 [2024-11-27 04:40:10.358479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.785 [2024-11-27 04:40:10.358512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:13.785 [2024-11-27 04:40:10.358527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.045 ms 00:23:13.785 [2024-11-27 04:40:10.358535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.785 [2024-11-27 04:40:10.358602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.785 [2024-11-27 04:40:10.358612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:13.785 [2024-11-27 04:40:10.358620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:13.785 [2024-11-27 04:40:10.358627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
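
Every management step above is reported through the same trace_step quartet (Action/Rollback, name, duration, status). If the console output is captured one record per line (say to ftl0.log, a hypothetical capture file), the per-step timings can be tabulated with standard tools; a rough sketch:

  $ grep -E 'name:|duration:' ftl0.log | paste - -

This pairs each "name:" record with the "duration:" record that immediately follows it; the "Management process finished" summary lines use "name '...'" and "duration =" and so stay out of the filter.
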
00:23:13.785 [2024-11-27 04:40:10.359479] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 256.563 ms, result 0 00:23:15.169  [2024-11-27T04:40:12.697Z] Copying: 43/1024 [MB] (43 MBps) [2024-11-27T04:40:13.637Z] Copying: 91/1024 [MB] (48 MBps) [2024-11-27T04:40:14.581Z] Copying: 139/1024 [MB] (47 MBps) [2024-11-27T04:40:15.964Z] Copying: 186/1024 [MB] (47 MBps) [2024-11-27T04:40:16.536Z] Copying: 233/1024 [MB] (47 MBps) [2024-11-27T04:40:17.969Z] Copying: 276/1024 [MB] (42 MBps) [2024-11-27T04:40:18.540Z] Copying: 323/1024 [MB] (46 MBps) [2024-11-27T04:40:19.926Z] Copying: 371/1024 [MB] (48 MBps) [2024-11-27T04:40:20.869Z] Copying: 422/1024 [MB] (51 MBps) [2024-11-27T04:40:21.810Z] Copying: 461/1024 [MB] (38 MBps) [2024-11-27T04:40:22.753Z] Copying: 505/1024 [MB] (43 MBps) [2024-11-27T04:40:23.697Z] Copying: 552/1024 [MB] (47 MBps) [2024-11-27T04:40:24.638Z] Copying: 602/1024 [MB] (50 MBps) [2024-11-27T04:40:25.579Z] Copying: 649/1024 [MB] (46 MBps) [2024-11-27T04:40:26.966Z] Copying: 694/1024 [MB] (44 MBps) [2024-11-27T04:40:27.564Z] Copying: 744/1024 [MB] (50 MBps) [2024-11-27T04:40:28.947Z] Copying: 793/1024 [MB] (49 MBps) [2024-11-27T04:40:29.887Z] Copying: 840/1024 [MB] (46 MBps) [2024-11-27T04:40:30.830Z] Copying: 874/1024 [MB] (34 MBps) [2024-11-27T04:40:31.773Z] Copying: 896/1024 [MB] (21 MBps) [2024-11-27T04:40:32.715Z] Copying: 913/1024 [MB] (17 MBps) [2024-11-27T04:40:33.695Z] Copying: 956/1024 [MB] (43 MBps) [2024-11-27T04:40:34.637Z] Copying: 974/1024 [MB] (17 MBps) [2024-11-27T04:40:35.580Z] Copying: 1008/1024 [MB] (34 MBps) [2024-11-27T04:40:36.996Z] Copying: 1024/1024 [MB] (average 41 MBps)[2024-11-27 04:40:36.774974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.409 [2024-11-27 04:40:36.775074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:40.409 [2024-11-27 04:40:36.775095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:40.409 [2024-11-27 04:40:36.775108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.409 [2024-11-27 04:40:36.775145] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:40.409 [2024-11-27 04:40:36.782052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.409 [2024-11-27 04:40:36.782122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:40.409 [2024-11-27 04:40:36.782138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.881 ms 00:23:40.409 [2024-11-27 04:40:36.782150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.409 [2024-11-27 04:40:36.782587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.409 [2024-11-27 04:40:36.782602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:40.409 [2024-11-27 04:40:36.782616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:23:40.409 [2024-11-27 04:40:36.782628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.409 [2024-11-27 04:40:36.789704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.409 [2024-11-27 04:40:36.789758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:40.409 [2024-11-27 04:40:36.789787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.053 ms 00:23:40.409 [2024-11-27 
04:40:36.789806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.409 [2024-11-27 04:40:36.804305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.409 [2024-11-27 04:40:36.804365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:40.409 [2024-11-27 04:40:36.804383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.468 ms 00:23:40.409 [2024-11-27 04:40:36.804396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.409 [2024-11-27 04:40:36.845614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.409 [2024-11-27 04:40:36.845688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:40.409 [2024-11-27 04:40:36.845707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.074 ms 00:23:40.409 [2024-11-27 04:40:36.845718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.410 [2024-11-27 04:40:36.866639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.410 [2024-11-27 04:40:36.866702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:40.410 [2024-11-27 04:40:36.866720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.836 ms 00:23:40.410 [2024-11-27 04:40:36.866754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.410 [2024-11-27 04:40:36.867010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.410 [2024-11-27 04:40:36.867027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:40.410 [2024-11-27 04:40:36.867041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:23:40.410 [2024-11-27 04:40:36.867053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.410 [2024-11-27 04:40:36.897174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.410 [2024-11-27 04:40:36.897230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:40.410 [2024-11-27 04:40:36.897245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.092 ms 00:23:40.410 [2024-11-27 04:40:36.897254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.410 [2024-11-27 04:40:36.922201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.410 [2024-11-27 04:40:36.922271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:40.410 [2024-11-27 04:40:36.922285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.896 ms 00:23:40.410 [2024-11-27 04:40:36.922293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.410 [2024-11-27 04:40:36.947029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.410 [2024-11-27 04:40:36.947084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:40.410 [2024-11-27 04:40:36.947096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.689 ms 00:23:40.410 [2024-11-27 04:40:36.947105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.410 [2024-11-27 04:40:36.971318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.410 [2024-11-27 04:40:36.972555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:40.410 [2024-11-27 04:40:36.972645] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.133 ms 00:23:40.410 [2024-11-27 04:40:36.972684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.410 [2024-11-27 04:40:36.972852] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:40.410 [2024-11-27 04:40:36.973014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:23:40.410 [2024-11-27 04:40:36.973922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.973993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.974999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.975718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.976327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.976647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.976825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.977500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.977555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.977592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.977628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.977664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.977699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.977770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.977810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.977847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.977882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.977918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.977954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.977990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.978026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.978062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.978098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.978134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.978170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.978206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.978241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.978278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.978315] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.978352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.978389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:40.410 [2024-11-27 04:40:36.978457] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:40.410 [2024-11-27 04:40:36.978492] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a28c5f8-1fbc-4d83-88bc-ea3ff92fe155 00:23:40.410 [2024-11-27 04:40:36.978527] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:40.410 [2024-11-27 04:40:36.978559] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:40.410 [2024-11-27 04:40:36.978593] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:40.410 [2024-11-27 04:40:36.978627] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:40.410 [2024-11-27 04:40:36.978683] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:40.410 [2024-11-27 04:40:36.978719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:40.410 [2024-11-27 04:40:36.978782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:40.410 [2024-11-27 04:40:36.978812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:40.410 [2024-11-27 04:40:36.978843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:40.410 [2024-11-27 04:40:36.978879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.410 [2024-11-27 04:40:36.978915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:40.410 [2024-11-27 04:40:36.978953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.030 ms 00:23:40.410 [2024-11-27 04:40:36.978996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.670 [2024-11-27 04:40:36.994188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.670 [2024-11-27 04:40:36.994235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:40.670 [2024-11-27 04:40:36.994247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.045 ms 00:23:40.670 [2024-11-27 04:40:36.994256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.670 [2024-11-27 04:40:36.994651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.670 [2024-11-27 04:40:36.994661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:40.670 [2024-11-27 04:40:36.994677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:23:40.670 [2024-11-27 04:40:36.994685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.670 [2024-11-27 04:40:37.031057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.670 [2024-11-27 04:40:37.031108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.670 [2024-11-27 04:40:37.031122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.670 [2024-11-27 04:40:37.031131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.670 [2024-11-27 04:40:37.031208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
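
The "WAF: inf" in the statistics dump above is expected for this pass. Taking WAF in its usual sense as the ratio of media writes to user writes, the 960 internal (metadata) writes against 0 user writes degenerate the ratio:

  WAF = total writes / user writes = 960 / 0 -> inf

That is, spdk_dd only read from ftl0 here, so every write the device performed was its own bookkeeping.
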
00:23:40.670 [2024-11-27 04:40:37.031217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.670 [2024-11-27 04:40:37.031232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.670 [2024-11-27 04:40:37.031241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.670 [2024-11-27 04:40:37.031316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.670 [2024-11-27 04:40:37.031327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.670 [2024-11-27 04:40:37.031336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.670 [2024-11-27 04:40:37.031344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.670 [2024-11-27 04:40:37.031360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.670 [2024-11-27 04:40:37.031376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.670 [2024-11-27 04:40:37.031385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.670 [2024-11-27 04:40:37.031394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.670 [2024-11-27 04:40:37.116331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.670 [2024-11-27 04:40:37.116400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.670 [2024-11-27 04:40:37.116416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.670 [2024-11-27 04:40:37.116425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.670 [2024-11-27 04:40:37.186326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.670 [2024-11-27 04:40:37.186388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.670 [2024-11-27 04:40:37.186409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.670 [2024-11-27 04:40:37.186418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.670 [2024-11-27 04:40:37.186534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.670 [2024-11-27 04:40:37.186547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:40.670 [2024-11-27 04:40:37.186556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.670 [2024-11-27 04:40:37.186565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.670 [2024-11-27 04:40:37.186605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.670 [2024-11-27 04:40:37.186615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:40.670 [2024-11-27 04:40:37.186623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.670 [2024-11-27 04:40:37.186632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.670 [2024-11-27 04:40:37.186764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.670 [2024-11-27 04:40:37.186776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:40.670 [2024-11-27 04:40:37.186785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.670 [2024-11-27 04:40:37.186794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.670 [2024-11-27 
04:40:37.186827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.670 [2024-11-27 04:40:37.186837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:40.670 [2024-11-27 04:40:37.186845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.670 [2024-11-27 04:40:37.186853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.670 [2024-11-27 04:40:37.186914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.670 [2024-11-27 04:40:37.186924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:40.670 [2024-11-27 04:40:37.186933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.670 [2024-11-27 04:40:37.186942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.670 [2024-11-27 04:40:37.186990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.670 [2024-11-27 04:40:37.187001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:40.670 [2024-11-27 04:40:37.187009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.671 [2024-11-27 04:40:37.187018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.671 [2024-11-27 04:40:37.187158] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.162 ms, result 0 00:23:41.612 00:23:41.612 00:23:41.612 04:40:38 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:44.247 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:44.247 04:40:40 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:44.247 [2024-11-27 04:40:40.402259] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
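
The md5sum match above closes the loop on the earlier read-back: --count=262144 produced the 1024/1024 [MB] copy reported by spdk_dd, which pins the FTL block size at 4 KiB (1073741824 B / 262144 blocks = 4096 B). By the same conversion, the --seek=131072 on this write-back lands at a 512 MiB offset. A quick sketch of the arithmetic, assuming both options count FTL blocks:

  $ echo $(( 262144 * 4096 / 1048576 ))   # read-back size in MiB
  1024
  $ echo $(( 131072 * 4096 / 1048576 ))   # write-back offset in MiB
  512
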
00:23:44.247 [2024-11-27 04:40:40.402439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77775 ] 00:23:44.247 [2024-11-27 04:40:40.566759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.247 [2024-11-27 04:40:40.705040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.507 [2024-11-27 04:40:41.010743] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:44.507 [2024-11-27 04:40:41.010826] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:44.769 [2024-11-27 04:40:41.171983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.769 [2024-11-27 04:40:41.172068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:44.769 [2024-11-27 04:40:41.172084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:44.769 [2024-11-27 04:40:41.172093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.769 [2024-11-27 04:40:41.172154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.769 [2024-11-27 04:40:41.172168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:44.769 [2024-11-27 04:40:41.172178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:44.769 [2024-11-27 04:40:41.172185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.769 [2024-11-27 04:40:41.172207] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:44.769 [2024-11-27 04:40:41.173026] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:44.769 [2024-11-27 04:40:41.173050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.769 [2024-11-27 04:40:41.173060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:44.769 [2024-11-27 04:40:41.173070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:23:44.769 [2024-11-27 04:40:41.173078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.769 [2024-11-27 04:40:41.174851] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:44.769 [2024-11-27 04:40:41.189268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.769 [2024-11-27 04:40:41.189343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:44.769 [2024-11-27 04:40:41.189359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.418 ms 00:23:44.769 [2024-11-27 04:40:41.189368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.769 [2024-11-27 04:40:41.189453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.770 [2024-11-27 04:40:41.189464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:44.770 [2024-11-27 04:40:41.189473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:44.770 [2024-11-27 04:40:41.189481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.770 [2024-11-27 04:40:41.197903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:44.770 [2024-11-27 04:40:41.197948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:44.770 [2024-11-27 04:40:41.197960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.343 ms 00:23:44.770 [2024-11-27 04:40:41.197974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.770 [2024-11-27 04:40:41.198056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.770 [2024-11-27 04:40:41.198065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:44.770 [2024-11-27 04:40:41.198074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:44.770 [2024-11-27 04:40:41.198083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.770 [2024-11-27 04:40:41.198131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.770 [2024-11-27 04:40:41.198142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:44.770 [2024-11-27 04:40:41.198150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:44.770 [2024-11-27 04:40:41.198158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.770 [2024-11-27 04:40:41.198188] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:44.770 [2024-11-27 04:40:41.202448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.770 [2024-11-27 04:40:41.202489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:44.770 [2024-11-27 04:40:41.202503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.268 ms 00:23:44.770 [2024-11-27 04:40:41.202511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.770 [2024-11-27 04:40:41.202548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.770 [2024-11-27 04:40:41.202558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:44.770 [2024-11-27 04:40:41.202567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:44.770 [2024-11-27 04:40:41.202575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.770 [2024-11-27 04:40:41.202629] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:44.770 [2024-11-27 04:40:41.202654] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:44.770 [2024-11-27 04:40:41.202693] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:44.770 [2024-11-27 04:40:41.202713] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:44.770 [2024-11-27 04:40:41.202845] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:44.770 [2024-11-27 04:40:41.202857] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:44.770 [2024-11-27 04:40:41.202869] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:44.770 [2024-11-27 04:40:41.202880] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:44.770 [2024-11-27 04:40:41.202891] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:44.770 [2024-11-27 04:40:41.202899] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:44.770 [2024-11-27 04:40:41.202908] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:44.770 [2024-11-27 04:40:41.202919] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:44.770 [2024-11-27 04:40:41.202927] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:44.770 [2024-11-27 04:40:41.202936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.770 [2024-11-27 04:40:41.202944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:44.770 [2024-11-27 04:40:41.202951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:23:44.770 [2024-11-27 04:40:41.202958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.770 [2024-11-27 04:40:41.203042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.770 [2024-11-27 04:40:41.203051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:44.770 [2024-11-27 04:40:41.203058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:44.770 [2024-11-27 04:40:41.203065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.770 [2024-11-27 04:40:41.203176] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:44.770 [2024-11-27 04:40:41.203188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:44.770 [2024-11-27 04:40:41.203197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:44.770 [2024-11-27 04:40:41.203205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.770 [2024-11-27 04:40:41.203214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:44.770 [2024-11-27 04:40:41.203221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:44.770 [2024-11-27 04:40:41.203228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:44.770 [2024-11-27 04:40:41.203235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:44.770 [2024-11-27 04:40:41.203243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:44.770 [2024-11-27 04:40:41.203250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:44.770 [2024-11-27 04:40:41.203257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:44.770 [2024-11-27 04:40:41.203264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:44.770 [2024-11-27 04:40:41.203271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:44.770 [2024-11-27 04:40:41.203286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:44.770 [2024-11-27 04:40:41.203293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:44.770 [2024-11-27 04:40:41.203300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.770 [2024-11-27 04:40:41.203307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:44.770 [2024-11-27 04:40:41.203314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:44.770 [2024-11-27 04:40:41.203320] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.770 [2024-11-27 04:40:41.203327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:44.770 [2024-11-27 04:40:41.203335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:44.770 [2024-11-27 04:40:41.203342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.770 [2024-11-27 04:40:41.203349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:44.770 [2024-11-27 04:40:41.203355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:44.770 [2024-11-27 04:40:41.203362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.770 [2024-11-27 04:40:41.203368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:44.770 [2024-11-27 04:40:41.203375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:44.770 [2024-11-27 04:40:41.203382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.770 [2024-11-27 04:40:41.203389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:44.770 [2024-11-27 04:40:41.203396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:44.770 [2024-11-27 04:40:41.203402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.770 [2024-11-27 04:40:41.203409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:44.770 [2024-11-27 04:40:41.203416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:44.770 [2024-11-27 04:40:41.203423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:44.770 [2024-11-27 04:40:41.203430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:44.770 [2024-11-27 04:40:41.203436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:44.770 [2024-11-27 04:40:41.203443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:44.770 [2024-11-27 04:40:41.203450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:44.771 [2024-11-27 04:40:41.203457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:44.771 [2024-11-27 04:40:41.203464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.771 [2024-11-27 04:40:41.203471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:44.771 [2024-11-27 04:40:41.203478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:44.771 [2024-11-27 04:40:41.203484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.771 [2024-11-27 04:40:41.203491] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:44.771 [2024-11-27 04:40:41.203498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:44.771 [2024-11-27 04:40:41.203507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:44.771 [2024-11-27 04:40:41.203515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.771 [2024-11-27 04:40:41.203524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:44.771 [2024-11-27 04:40:41.203532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:44.771 [2024-11-27 04:40:41.203539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:44.771 
[2024-11-27 04:40:41.203546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:44.771 [2024-11-27 04:40:41.203553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:44.771 [2024-11-27 04:40:41.203561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:44.771 [2024-11-27 04:40:41.203570] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:44.771 [2024-11-27 04:40:41.203579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:44.771 [2024-11-27 04:40:41.203590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:44.771 [2024-11-27 04:40:41.203598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:44.771 [2024-11-27 04:40:41.203604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:44.771 [2024-11-27 04:40:41.203612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:44.771 [2024-11-27 04:40:41.203619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:44.771 [2024-11-27 04:40:41.203626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:44.771 [2024-11-27 04:40:41.203633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:44.771 [2024-11-27 04:40:41.203641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:44.771 [2024-11-27 04:40:41.203648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:44.771 [2024-11-27 04:40:41.203655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:44.771 [2024-11-27 04:40:41.203662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:44.771 [2024-11-27 04:40:41.203669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:44.771 [2024-11-27 04:40:41.203677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:44.771 [2024-11-27 04:40:41.203684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:44.771 [2024-11-27 04:40:41.203692] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:44.771 [2024-11-27 04:40:41.203701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:44.771 [2024-11-27 04:40:41.203709] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:44.771 [2024-11-27 04:40:41.203717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:44.771 [2024-11-27 04:40:41.203739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:44.771 [2024-11-27 04:40:41.203746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:44.771 [2024-11-27 04:40:41.203754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.771 [2024-11-27 04:40:41.203761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:44.771 [2024-11-27 04:40:41.203773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:23:44.771 [2024-11-27 04:40:41.203782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.771 [2024-11-27 04:40:41.236147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.771 [2024-11-27 04:40:41.236204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:44.771 [2024-11-27 04:40:41.236216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.317 ms 00:23:44.771 [2024-11-27 04:40:41.236228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.771 [2024-11-27 04:40:41.236317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.771 [2024-11-27 04:40:41.236327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:44.771 [2024-11-27 04:40:41.236336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:44.771 [2024-11-27 04:40:41.236344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.771 [2024-11-27 04:40:41.285575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.771 [2024-11-27 04:40:41.285636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:44.771 [2024-11-27 04:40:41.285651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.165 ms 00:23:44.771 [2024-11-27 04:40:41.285661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.771 [2024-11-27 04:40:41.285720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.771 [2024-11-27 04:40:41.285754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:44.771 [2024-11-27 04:40:41.285767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:44.771 [2024-11-27 04:40:41.285776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.771 [2024-11-27 04:40:41.286402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.771 [2024-11-27 04:40:41.286435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:44.771 [2024-11-27 04:40:41.286446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:23:44.771 [2024-11-27 04:40:41.286455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.771 [2024-11-27 04:40:41.286613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.771 [2024-11-27 04:40:41.286624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:44.771 [2024-11-27 04:40:41.286638] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:23:44.771 [2024-11-27 04:40:41.286647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.771 [2024-11-27 04:40:41.302354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.771 [2024-11-27 04:40:41.302568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:44.771 [2024-11-27 04:40:41.302587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.687 ms 00:23:44.771 [2024-11-27 04:40:41.302596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.771 [2024-11-27 04:40:41.316814] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:44.771 [2024-11-27 04:40:41.317025] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:44.771 [2024-11-27 04:40:41.317045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.771 [2024-11-27 04:40:41.317054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:44.771 [2024-11-27 04:40:41.317063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.296 ms 00:23:44.771 [2024-11-27 04:40:41.317072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.771 [2024-11-27 04:40:41.342623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.771 [2024-11-27 04:40:41.342674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:44.771 [2024-11-27 04:40:41.342687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.505 ms 00:23:44.771 [2024-11-27 04:40:41.342696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.032 [2024-11-27 04:40:41.355826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.032 [2024-11-27 04:40:41.356003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:45.032 [2024-11-27 04:40:41.356023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.047 ms 00:23:45.032 [2024-11-27 04:40:41.356032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.032 [2024-11-27 04:40:41.368387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.032 [2024-11-27 04:40:41.368433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:45.032 [2024-11-27 04:40:41.368445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.316 ms 00:23:45.032 [2024-11-27 04:40:41.368452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.032 [2024-11-27 04:40:41.369196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.032 [2024-11-27 04:40:41.369222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:45.032 [2024-11-27 04:40:41.369236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:23:45.032 [2024-11-27 04:40:41.369244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.032 [2024-11-27 04:40:41.435146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.032 [2024-11-27 04:40:41.435422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:45.032 [2024-11-27 04:40:41.435459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.877 ms 00:23:45.032 [2024-11-27 04:40:41.435469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.032 [2024-11-27 04:40:41.447465] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:45.032 [2024-11-27 04:40:41.451015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.032 [2024-11-27 04:40:41.451063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:45.032 [2024-11-27 04:40:41.451078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.109 ms 00:23:45.032 [2024-11-27 04:40:41.451086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.032 [2024-11-27 04:40:41.451206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.032 [2024-11-27 04:40:41.451220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:45.032 [2024-11-27 04:40:41.451233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:45.032 [2024-11-27 04:40:41.451242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.032 [2024-11-27 04:40:41.451317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.032 [2024-11-27 04:40:41.451327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:45.032 [2024-11-27 04:40:41.451336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:45.032 [2024-11-27 04:40:41.451344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.032 [2024-11-27 04:40:41.451365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.032 [2024-11-27 04:40:41.451375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:45.032 [2024-11-27 04:40:41.451384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:45.032 [2024-11-27 04:40:41.451391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.032 [2024-11-27 04:40:41.451432] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:45.032 [2024-11-27 04:40:41.451443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.032 [2024-11-27 04:40:41.451452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:45.032 [2024-11-27 04:40:41.451461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:45.032 [2024-11-27 04:40:41.451469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.032 [2024-11-27 04:40:41.477853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.032 [2024-11-27 04:40:41.478051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:45.032 [2024-11-27 04:40:41.478082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.363 ms 00:23:45.032 [2024-11-27 04:40:41.478091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.032 [2024-11-27 04:40:41.478180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.032 [2024-11-27 04:40:41.478192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:45.032 [2024-11-27 04:40:41.478201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:23:45.032 [2024-11-27 04:40:41.478210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
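
The startup-finished message just below puts the whole bring-up, from the configuration check through the L2P restore, at roughly 307 ms; the Copying: lines that follow then track the transfer itself. 1024 MB at the reported average of 39 MBps works out to about 26 s (1024 / 39 ≈ 26.3), which matches the wall clock from the startup finish at 04:40:41 to the last progress report at 04:41:07.
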
00:23:45.032 [2024-11-27 04:40:41.479483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.013 ms, result 0 00:23:45.975  [2024-11-27T04:40:43.571Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-27T04:40:44.513Z] Copying: 70/1024 [MB] (45 MBps) [2024-11-27T04:40:45.892Z] Copying: 115/1024 [MB] (45 MBps) [2024-11-27T04:40:46.528Z] Copying: 160/1024 [MB] (45 MBps) [2024-11-27T04:40:47.917Z] Copying: 194/1024 [MB] (34 MBps) [2024-11-27T04:40:48.856Z] Copying: 214/1024 [MB] (19 MBps) [2024-11-27T04:40:49.799Z] Copying: 237/1024 [MB] (22 MBps) [2024-11-27T04:40:50.743Z] Copying: 281/1024 [MB] (43 MBps) [2024-11-27T04:40:51.686Z] Copying: 320/1024 [MB] (39 MBps) [2024-11-27T04:40:52.631Z] Copying: 366/1024 [MB] (46 MBps) [2024-11-27T04:40:53.573Z] Copying: 412/1024 [MB] (45 MBps) [2024-11-27T04:40:54.514Z] Copying: 457/1024 [MB] (45 MBps) [2024-11-27T04:40:55.899Z] Copying: 504/1024 [MB] (47 MBps) [2024-11-27T04:40:56.840Z] Copying: 550/1024 [MB] (45 MBps) [2024-11-27T04:40:57.774Z] Copying: 596/1024 [MB] (45 MBps) [2024-11-27T04:40:58.709Z] Copying: 635/1024 [MB] (38 MBps) [2024-11-27T04:40:59.718Z] Copying: 671/1024 [MB] (36 MBps) [2024-11-27T04:41:00.677Z] Copying: 717/1024 [MB] (45 MBps) [2024-11-27T04:41:01.610Z] Copying: 754/1024 [MB] (37 MBps) [2024-11-27T04:41:02.543Z] Copying: 799/1024 [MB] (44 MBps) [2024-11-27T04:41:03.916Z] Copying: 844/1024 [MB] (45 MBps) [2024-11-27T04:41:04.847Z] Copying: 889/1024 [MB] (45 MBps) [2024-11-27T04:41:05.778Z] Copying: 935/1024 [MB] (45 MBps) [2024-11-27T04:41:06.709Z] Copying: 982/1024 [MB] (46 MBps) [2024-11-27T04:41:07.642Z] Copying: 1023/1024 [MB] (41 MBps) [2024-11-27T04:41:07.642Z] Copying: 1024/1024 [MB] (average 39 MBps)[2024-11-27 04:41:07.445711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.055 [2024-11-27 04:41:07.445780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:11.055 [2024-11-27 04:41:07.445803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:11.055 [2024-11-27 04:41:07.445812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.055 [2024-11-27 04:41:07.447822] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:11.056 [2024-11-27 04:41:07.451357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.056 [2024-11-27 04:41:07.451392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:11.056 [2024-11-27 04:41:07.451403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.499 ms 00:24:11.056 [2024-11-27 04:41:07.451412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.056 [2024-11-27 04:41:07.463354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.056 [2024-11-27 04:41:07.463389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:11.056 [2024-11-27 04:41:07.463398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.929 ms 00:24:11.056 [2024-11-27 04:41:07.463411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.056 [2024-11-27 04:41:07.480567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.056 [2024-11-27 04:41:07.480600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:11.056 [2024-11-27 04:41:07.480611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 17.140 ms 00:24:11.056 [2024-11-27 04:41:07.480626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.056 [2024-11-27 04:41:07.486773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.056 [2024-11-27 04:41:07.486801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:11.056 [2024-11-27 04:41:07.486812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.121 ms 00:24:11.056 [2024-11-27 04:41:07.486825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.056 [2024-11-27 04:41:07.510027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.056 [2024-11-27 04:41:07.510177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:11.056 [2024-11-27 04:41:07.510194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.168 ms 00:24:11.056 [2024-11-27 04:41:07.510203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.056 [2024-11-27 04:41:07.524001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.056 [2024-11-27 04:41:07.524035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:11.056 [2024-11-27 04:41:07.524047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.765 ms 00:24:11.056 [2024-11-27 04:41:07.524055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.056 [2024-11-27 04:41:07.575176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.056 [2024-11-27 04:41:07.575215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:11.056 [2024-11-27 04:41:07.575225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.085 ms 00:24:11.056 [2024-11-27 04:41:07.575232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.056 [2024-11-27 04:41:07.598420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.056 [2024-11-27 04:41:07.598451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:11.056 [2024-11-27 04:41:07.598461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.174 ms 00:24:11.056 [2024-11-27 04:41:07.598469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.056 [2024-11-27 04:41:07.621163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.056 [2024-11-27 04:41:07.621296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:11.056 [2024-11-27 04:41:07.621311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.663 ms 00:24:11.056 [2024-11-27 04:41:07.621319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.316 [2024-11-27 04:41:07.643815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.316 [2024-11-27 04:41:07.643936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:11.316 [2024-11-27 04:41:07.643950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.469 ms 00:24:11.316 [2024-11-27 04:41:07.643957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.316 [2024-11-27 04:41:07.665778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.316 [2024-11-27 04:41:07.665810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:11.316 
[2024-11-27 04:41:07.665821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.773 ms 00:24:11.316 [2024-11-27 04:41:07.665828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.316 [2024-11-27 04:41:07.665859] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:11.316 [2024-11-27 04:41:07.665872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 120576 / 261120 wr_cnt: 1 state: open 00:24:11.316 [2024-11-27 04:41:07.665882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.665890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.665898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.665906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.665914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.665921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.665929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.665936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.665944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.665951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.665959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.665966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.665973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.665981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.665988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.665996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666040] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666224] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:11.316 [2024-11-27 04:41:07.666246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 
04:41:07.666407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:24:11.317 [2024-11-27 04:41:07.666590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:11.317 [2024-11-27 04:41:07.666621] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:11.317 [2024-11-27 04:41:07.666628] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a28c5f8-1fbc-4d83-88bc-ea3ff92fe155 00:24:11.317 [2024-11-27 04:41:07.666636] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 120576 00:24:11.317 [2024-11-27 04:41:07.666642] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 121536 00:24:11.317 [2024-11-27 04:41:07.666649] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 120576 00:24:11.317 [2024-11-27 04:41:07.666657] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0080 00:24:11.317 [2024-11-27 04:41:07.666673] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:11.317 [2024-11-27 04:41:07.666680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:11.317 [2024-11-27 04:41:07.666687] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:11.317 [2024-11-27 04:41:07.666694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:11.317 [2024-11-27 04:41:07.666700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:11.317 [2024-11-27 04:41:07.666708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.317 [2024-11-27 04:41:07.666715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:11.317 [2024-11-27 04:41:07.666738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:24:11.317 [2024-11-27 04:41:07.666746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.317 [2024-11-27 04:41:07.678934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.317 [2024-11-27 04:41:07.678964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:11.317 [2024-11-27 04:41:07.678977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.173 ms 00:24:11.317 [2024-11-27 04:41:07.678984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.317 [2024-11-27 04:41:07.679319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.317 [2024-11-27 04:41:07.679327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:11.317 [2024-11-27 04:41:07.679335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:24:11.317 [2024-11-27 04:41:07.679342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.317 [2024-11-27 04:41:07.712033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.318 [2024-11-27 04:41:07.712067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:11.318 [2024-11-27 04:41:07.712077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.318 [2024-11-27 04:41:07.712085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.318 [2024-11-27 04:41:07.712140] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.318 [2024-11-27 04:41:07.712152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:11.318 [2024-11-27 04:41:07.712159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.318 [2024-11-27 04:41:07.712167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.318 [2024-11-27 04:41:07.712214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.318 [2024-11-27 04:41:07.712227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:11.318 [2024-11-27 04:41:07.712234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.318 [2024-11-27 04:41:07.712241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.318 [2024-11-27 04:41:07.712255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.318 [2024-11-27 04:41:07.712263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:11.318 [2024-11-27 04:41:07.712270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.318 [2024-11-27 04:41:07.712277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.318 [2024-11-27 04:41:07.790377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.318 [2024-11-27 04:41:07.790586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:11.318 [2024-11-27 04:41:07.790605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.318 [2024-11-27 04:41:07.790613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.318 [2024-11-27 04:41:07.854382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.318 [2024-11-27 04:41:07.854558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:11.318 [2024-11-27 04:41:07.854609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.318 [2024-11-27 04:41:07.854632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.318 [2024-11-27 04:41:07.854694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.318 [2024-11-27 04:41:07.854716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:11.318 [2024-11-27 04:41:07.854762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.318 [2024-11-27 04:41:07.854787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.318 [2024-11-27 04:41:07.854845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.318 [2024-11-27 04:41:07.854868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:11.318 [2024-11-27 04:41:07.854955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.318 [2024-11-27 04:41:07.854979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.318 [2024-11-27 04:41:07.855086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.318 [2024-11-27 04:41:07.855178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:11.318 [2024-11-27 04:41:07.855261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.318 [2024-11-27 04:41:07.855288] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.318 [2024-11-27 04:41:07.855334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.318 [2024-11-27 04:41:07.855356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:11.318 [2024-11-27 04:41:07.855375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.318 [2024-11-27 04:41:07.855393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.318 [2024-11-27 04:41:07.855436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.318 [2024-11-27 04:41:07.855499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:11.318 [2024-11-27 04:41:07.855521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.318 [2024-11-27 04:41:07.855539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.318 [2024-11-27 04:41:07.855595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.318 [2024-11-27 04:41:07.855618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:11.318 [2024-11-27 04:41:07.855637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.318 [2024-11-27 04:41:07.855752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.318 [2024-11-27 04:41:07.855875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 413.869 ms, result 0 00:24:13.846 00:24:13.846 00:24:13.847 04:41:10 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:13.847 [2024-11-27 04:41:10.237986] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
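
The statistics dumped during this shutdown also show where the reported WAF of 1.0080 comes from: 121536 total writes against 120576 user writes (121536 / 120576 ≈ 1.0080), i.e. only about 960 blocks of metadata written on top of the user data. The spdk_dd invocation at restore.sh@80 above then reverses the earlier write-back: --ib names the input bdev, --of the output file, and --skip=131072 with --count=262144 reads back the region just written (262144 blocks for 1024 MB of data is consistent with a 4 KiB block size). A sketch of the read-back, assuming the test closes with the same md5sum -c verification used before the write:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
      --skip=131072 \
      --count=262144
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
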
00:24:13.847 [2024-11-27 04:41:10.238251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78071 ] 00:24:13.847 [2024-11-27 04:41:10.399768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.104 [2024-11-27 04:41:10.499344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.364 [2024-11-27 04:41:10.757358] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:14.364 [2024-11-27 04:41:10.757543] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:14.364 [2024-11-27 04:41:10.910769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.364 [2024-11-27 04:41:10.911035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:14.364 [2024-11-27 04:41:10.911183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:14.364 [2024-11-27 04:41:10.911295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.364 [2024-11-27 04:41:10.911516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.364 [2024-11-27 04:41:10.911670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:14.364 [2024-11-27 04:41:10.911813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:14.364 [2024-11-27 04:41:10.911931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.364 [2024-11-27 04:41:10.911998] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:14.364 [2024-11-27 04:41:10.912983] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:14.364 [2024-11-27 04:41:10.913105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.364 [2024-11-27 04:41:10.913183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:14.364 [2024-11-27 04:41:10.913201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.117 ms 00:24:14.364 [2024-11-27 04:41:10.913215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.364 [2024-11-27 04:41:10.914373] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:14.364 [2024-11-27 04:41:10.926844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.364 [2024-11-27 04:41:10.926979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:14.364 [2024-11-27 04:41:10.927002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.472 ms 00:24:14.364 [2024-11-27 04:41:10.927015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.364 [2024-11-27 04:41:10.927088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.364 [2024-11-27 04:41:10.927103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:14.364 [2024-11-27 04:41:10.927117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:14.364 [2024-11-27 04:41:10.927129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.364 [2024-11-27 04:41:10.932151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
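
Each spdk_dd invocation brings the FTL device up from scratch, so the sequence above repeats what the write pass already showed: two "Currently unable to find bdev with name: nvc0n1" notices while the bdev layer finishes registering, then the configuration check, bdev opens, and superblock load. The layout dump that follows matches the earlier one as well; as a sanity check on the geometry, the 80.00 MiB l2p region is exactly the mapping-table size implied by the reported parameters: 20971520 L2P entries × 4 bytes per entry = 83886080 bytes = 80 MiB.
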
00:24:14.364 [2024-11-27 04:41:10.932185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:14.364 [2024-11-27 04:41:10.932199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.933 ms 00:24:14.364 [2024-11-27 04:41:10.932214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.364 [2024-11-27 04:41:10.932305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.364 [2024-11-27 04:41:10.932319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:14.364 [2024-11-27 04:41:10.932333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:14.364 [2024-11-27 04:41:10.932345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.364 [2024-11-27 04:41:10.932406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.364 [2024-11-27 04:41:10.932421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:14.364 [2024-11-27 04:41:10.932435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:14.364 [2024-11-27 04:41:10.932447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.364 [2024-11-27 04:41:10.932484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:14.364 [2024-11-27 04:41:10.935919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.364 [2024-11-27 04:41:10.935951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:14.364 [2024-11-27 04:41:10.935968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.444 ms 00:24:14.364 [2024-11-27 04:41:10.935979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.364 [2024-11-27 04:41:10.936021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.364 [2024-11-27 04:41:10.936035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:14.364 [2024-11-27 04:41:10.936048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:14.364 [2024-11-27 04:41:10.936060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.364 [2024-11-27 04:41:10.936089] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:14.364 [2024-11-27 04:41:10.936115] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:14.364 [2024-11-27 04:41:10.936164] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:14.364 [2024-11-27 04:41:10.936191] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:14.364 [2024-11-27 04:41:10.936334] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:14.364 [2024-11-27 04:41:10.936351] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:14.364 [2024-11-27 04:41:10.936368] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:14.364 [2024-11-27 04:41:10.936384] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:14.365 [2024-11-27 04:41:10.936398] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:14.365 [2024-11-27 04:41:10.936412] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:14.365 [2024-11-27 04:41:10.936425] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:14.365 [2024-11-27 04:41:10.936440] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:14.365 [2024-11-27 04:41:10.936452] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:14.365 [2024-11-27 04:41:10.936465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.365 [2024-11-27 04:41:10.936477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:14.365 [2024-11-27 04:41:10.936489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:24:14.365 [2024-11-27 04:41:10.936501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.365 [2024-11-27 04:41:10.936617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.365 [2024-11-27 04:41:10.936631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:14.365 [2024-11-27 04:41:10.936644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:24:14.365 [2024-11-27 04:41:10.936656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.365 [2024-11-27 04:41:10.936840] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:14.365 [2024-11-27 04:41:10.936859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:14.365 [2024-11-27 04:41:10.936874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:14.365 [2024-11-27 04:41:10.936887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.365 [2024-11-27 04:41:10.936899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:14.365 [2024-11-27 04:41:10.936911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:14.365 [2024-11-27 04:41:10.936946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:14.365 [2024-11-27 04:41:10.936958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:14.365 [2024-11-27 04:41:10.936971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:14.365 [2024-11-27 04:41:10.936982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:14.365 [2024-11-27 04:41:10.936994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:14.365 [2024-11-27 04:41:10.937006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:14.365 [2024-11-27 04:41:10.937017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:14.365 [2024-11-27 04:41:10.937037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:14.365 [2024-11-27 04:41:10.937048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:14.365 [2024-11-27 04:41:10.937060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.365 [2024-11-27 04:41:10.937071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:14.365 [2024-11-27 04:41:10.937083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:14.365 [2024-11-27 04:41:10.937095] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.365 [2024-11-27 04:41:10.937107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:14.365 [2024-11-27 04:41:10.937119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:14.365 [2024-11-27 04:41:10.937130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.365 [2024-11-27 04:41:10.937141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:14.365 [2024-11-27 04:41:10.937153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:14.365 [2024-11-27 04:41:10.937164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.365 [2024-11-27 04:41:10.937175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:14.365 [2024-11-27 04:41:10.937187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:14.365 [2024-11-27 04:41:10.937198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.365 [2024-11-27 04:41:10.937209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:14.365 [2024-11-27 04:41:10.937221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:14.365 [2024-11-27 04:41:10.937232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.365 [2024-11-27 04:41:10.937243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:14.365 [2024-11-27 04:41:10.937254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:14.365 [2024-11-27 04:41:10.937266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:14.365 [2024-11-27 04:41:10.937277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:14.365 [2024-11-27 04:41:10.937288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:14.365 [2024-11-27 04:41:10.937300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:14.365 [2024-11-27 04:41:10.937311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:14.365 [2024-11-27 04:41:10.937325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:14.365 [2024-11-27 04:41:10.937337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.365 [2024-11-27 04:41:10.937349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:14.365 [2024-11-27 04:41:10.937360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:14.365 [2024-11-27 04:41:10.937372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.365 [2024-11-27 04:41:10.937383] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:14.365 [2024-11-27 04:41:10.937395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:14.365 [2024-11-27 04:41:10.937407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:14.365 [2024-11-27 04:41:10.937420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.365 [2024-11-27 04:41:10.937432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:14.365 [2024-11-27 04:41:10.937444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:14.365 [2024-11-27 04:41:10.937456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:14.365 
[2024-11-27 04:41:10.937467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:14.365 [2024-11-27 04:41:10.937479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:14.365 [2024-11-27 04:41:10.937491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:14.365 [2024-11-27 04:41:10.937505] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:14.365 [2024-11-27 04:41:10.937519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:14.365 [2024-11-27 04:41:10.937537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:14.365 [2024-11-27 04:41:10.937549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:14.365 [2024-11-27 04:41:10.937562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:14.365 [2024-11-27 04:41:10.937574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:14.365 [2024-11-27 04:41:10.937586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:14.365 [2024-11-27 04:41:10.937599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:14.365 [2024-11-27 04:41:10.937611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:14.366 [2024-11-27 04:41:10.937623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:14.366 [2024-11-27 04:41:10.937636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:14.366 [2024-11-27 04:41:10.937648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:14.366 [2024-11-27 04:41:10.937661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:14.366 [2024-11-27 04:41:10.937674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:14.366 [2024-11-27 04:41:10.937686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:14.366 [2024-11-27 04:41:10.937699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:14.366 [2024-11-27 04:41:10.937711] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:14.366 [2024-11-27 04:41:10.937739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:14.366 [2024-11-27 04:41:10.937754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:14.366 [2024-11-27 04:41:10.937766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:14.366 [2024-11-27 04:41:10.937779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:14.366 [2024-11-27 04:41:10.937793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:14.366 [2024-11-27 04:41:10.937806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.366 [2024-11-27 04:41:10.937819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:14.366 [2024-11-27 04:41:10.937832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:24:14.366 [2024-11-27 04:41:10.937844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.624 [2024-11-27 04:41:10.963852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.624 [2024-11-27 04:41:10.963895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:14.624 [2024-11-27 04:41:10.963911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.931 ms 00:24:14.624 [2024-11-27 04:41:10.963926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.624 [2024-11-27 04:41:10.964040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.624 [2024-11-27 04:41:10.964055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:14.624 [2024-11-27 04:41:10.964068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:14.624 [2024-11-27 04:41:10.964080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.624 [2024-11-27 04:41:11.002350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.624 [2024-11-27 04:41:11.002400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:14.624 [2024-11-27 04:41:11.002417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.188 ms 00:24:14.624 [2024-11-27 04:41:11.002429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.624 [2024-11-27 04:41:11.002496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.624 [2024-11-27 04:41:11.002511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:14.624 [2024-11-27 04:41:11.002528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:14.624 [2024-11-27 04:41:11.002539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.624 [2024-11-27 04:41:11.002961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.624 [2024-11-27 04:41:11.002986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:14.624 [2024-11-27 04:41:11.003000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:24:14.624 [2024-11-27 04:41:11.003011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.624 [2024-11-27 04:41:11.003204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.624 [2024-11-27 04:41:11.003231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:14.624 [2024-11-27 04:41:11.003250] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:24:14.624 [2024-11-27 04:41:11.003262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.624 [2024-11-27 04:41:11.016176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.624 [2024-11-27 04:41:11.016213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:14.624 [2024-11-27 04:41:11.016228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.883 ms 00:24:14.624 [2024-11-27 04:41:11.016239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.624 [2024-11-27 04:41:11.028460] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:14.624 [2024-11-27 04:41:11.028496] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:14.624 [2024-11-27 04:41:11.028512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.624 [2024-11-27 04:41:11.028523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:14.625 [2024-11-27 04:41:11.028535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.148 ms 00:24:14.625 [2024-11-27 04:41:11.028546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.625 [2024-11-27 04:41:11.052505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.625 [2024-11-27 04:41:11.052653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:14.625 [2024-11-27 04:41:11.052677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.909 ms 00:24:14.625 [2024-11-27 04:41:11.052690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.625 [2024-11-27 04:41:11.064294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.625 [2024-11-27 04:41:11.064331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:14.625 [2024-11-27 04:41:11.064346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.571 ms 00:24:14.625 [2024-11-27 04:41:11.064357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.625 [2024-11-27 04:41:11.075507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.625 [2024-11-27 04:41:11.075639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:14.625 [2024-11-27 04:41:11.075660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.108 ms 00:24:14.625 [2024-11-27 04:41:11.075671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.625 [2024-11-27 04:41:11.076367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.625 [2024-11-27 04:41:11.076393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:14.625 [2024-11-27 04:41:11.076409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:24:14.625 [2024-11-27 04:41:11.076420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.625 [2024-11-27 04:41:11.130655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.625 [2024-11-27 04:41:11.130872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:14.625 [2024-11-27 04:41:11.130906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.207 ms 00:24:14.625 [2024-11-27 04:41:11.130917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.625 [2024-11-27 04:41:11.141712] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:14.625 [2024-11-27 04:41:11.144362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.625 [2024-11-27 04:41:11.144396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:14.625 [2024-11-27 04:41:11.144412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.389 ms 00:24:14.625 [2024-11-27 04:41:11.144424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.625 [2024-11-27 04:41:11.144553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.625 [2024-11-27 04:41:11.144570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:14.625 [2024-11-27 04:41:11.144587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:14.625 [2024-11-27 04:41:11.144600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.625 [2024-11-27 04:41:11.146074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.625 [2024-11-27 04:41:11.146111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:14.625 [2024-11-27 04:41:11.146125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.418 ms 00:24:14.625 [2024-11-27 04:41:11.146135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.625 [2024-11-27 04:41:11.146179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.625 [2024-11-27 04:41:11.146193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:14.625 [2024-11-27 04:41:11.146206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:14.625 [2024-11-27 04:41:11.146219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.625 [2024-11-27 04:41:11.146263] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:14.625 [2024-11-27 04:41:11.146279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.625 [2024-11-27 04:41:11.146292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:14.625 [2024-11-27 04:41:11.146305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:14.625 [2024-11-27 04:41:11.146317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.625 [2024-11-27 04:41:11.169267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.625 [2024-11-27 04:41:11.169397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:14.625 [2024-11-27 04:41:11.169424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.926 ms 00:24:14.625 [2024-11-27 04:41:11.169436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.625 [2024-11-27 04:41:11.169524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.625 [2024-11-27 04:41:11.169541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:14.625 [2024-11-27 04:41:11.169554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:14.625 [2024-11-27 04:41:11.169567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
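The layout dump above is internally consistent: the l2p region's "blocks: 80.00 MiB" follows directly from "L2P entries: 20971520" and "L2P address size: 4". A quick check with plain shell arithmetic (nothing SPDK-specific is assumed):

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # 20971520 entries * 4-byte addresses = 80 MiB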
00:24:14.625 [2024-11-27 04:41:11.170501] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 259.357 ms, result 0 00:24:15.998  [2024-11-27T04:41:13.519Z] Copying: 44/1024 [MB] (44 MBps) [2024-11-27T04:41:14.485Z] Copying: 93/1024 [MB] (48 MBps) [2024-11-27T04:41:15.418Z] Copying: 140/1024 [MB] (47 MBps) [2024-11-27T04:41:16.356Z] Copying: 189/1024 [MB] (49 MBps) [2024-11-27T04:41:17.731Z] Copying: 237/1024 [MB] (48 MBps) [2024-11-27T04:41:18.666Z] Copying: 288/1024 [MB] (50 MBps) [2024-11-27T04:41:19.598Z] Copying: 335/1024 [MB] (47 MBps) [2024-11-27T04:41:20.532Z] Copying: 384/1024 [MB] (48 MBps) [2024-11-27T04:41:21.466Z] Copying: 431/1024 [MB] (47 MBps) [2024-11-27T04:41:22.400Z] Copying: 479/1024 [MB] (47 MBps) [2024-11-27T04:41:23.774Z] Copying: 521/1024 [MB] (42 MBps) [2024-11-27T04:41:24.707Z] Copying: 567/1024 [MB] (45 MBps) [2024-11-27T04:41:25.641Z] Copying: 616/1024 [MB] (49 MBps) [2024-11-27T04:41:26.575Z] Copying: 665/1024 [MB] (49 MBps) [2024-11-27T04:41:27.509Z] Copying: 713/1024 [MB] (47 MBps) [2024-11-27T04:41:28.442Z] Copying: 757/1024 [MB] (44 MBps) [2024-11-27T04:41:29.375Z] Copying: 804/1024 [MB] (47 MBps) [2024-11-27T04:41:30.748Z] Copying: 852/1024 [MB] (47 MBps) [2024-11-27T04:41:31.682Z] Copying: 899/1024 [MB] (47 MBps) [2024-11-27T04:41:32.615Z] Copying: 949/1024 [MB] (49 MBps) [2024-11-27T04:41:32.872Z] Copying: 997/1024 [MB] (48 MBps) [2024-11-27T04:41:33.130Z] Copying: 1024/1024 [MB] (average 47 MBps)[2024-11-27 04:41:32.971719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.543 [2024-11-27 04:41:32.971801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:36.543 [2024-11-27 04:41:32.971820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:36.543 [2024-11-27 04:41:32.971829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.543 [2024-11-27 04:41:32.971851] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:36.543 [2024-11-27 04:41:32.974502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.543 [2024-11-27 04:41:32.974681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:36.543 [2024-11-27 04:41:32.974698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.636 ms 00:24:36.543 [2024-11-27 04:41:32.974707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.543 [2024-11-27 04:41:32.974947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.543 [2024-11-27 04:41:32.974958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:36.543 [2024-11-27 04:41:32.974966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:24:36.543 [2024-11-27 04:41:32.974976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.543 [2024-11-27 04:41:32.980574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.543 [2024-11-27 04:41:32.980610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:36.543 [2024-11-27 04:41:32.980621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.582 ms 00:24:36.543 [2024-11-27 04:41:32.980631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.543 [2024-11-27 04:41:32.988466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:36.543 [2024-11-27 04:41:32.988499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:36.543 [2024-11-27 04:41:32.988512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.802 ms 00:24:36.543 [2024-11-27 04:41:32.988526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.543 [2024-11-27 04:41:33.012272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.543 [2024-11-27 04:41:33.012304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:36.543 [2024-11-27 04:41:33.012314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.683 ms 00:24:36.543 [2024-11-27 04:41:33.012322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.543 [2024-11-27 04:41:33.025865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.543 [2024-11-27 04:41:33.025995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:36.543 [2024-11-27 04:41:33.026011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.511 ms 00:24:36.543 [2024-11-27 04:41:33.026019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.543 [2024-11-27 04:41:33.078582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.543 [2024-11-27 04:41:33.078618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:36.543 [2024-11-27 04:41:33.078629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.529 ms 00:24:36.543 [2024-11-27 04:41:33.078637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.543 [2024-11-27 04:41:33.101451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.543 [2024-11-27 04:41:33.101591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:36.543 [2024-11-27 04:41:33.101606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.800 ms 00:24:36.543 [2024-11-27 04:41:33.101614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.543 [2024-11-27 04:41:33.124078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.543 [2024-11-27 04:41:33.124193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:36.543 [2024-11-27 04:41:33.124208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.437 ms 00:24:36.543 [2024-11-27 04:41:33.124215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.802 [2024-11-27 04:41:33.146263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.802 [2024-11-27 04:41:33.146291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:36.802 [2024-11-27 04:41:33.146301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.021 ms 00:24:36.802 [2024-11-27 04:41:33.146308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.802 [2024-11-27 04:41:33.167902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.802 [2024-11-27 04:41:33.167932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:36.802 [2024-11-27 04:41:33.167941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.542 ms 00:24:36.802 [2024-11-27 04:41:33.167948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.802 [2024-11-27 
04:41:33.167977] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:36.802 [2024-11-27 04:41:33.167990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:36.802 [2024-11-27 04:41:33.167999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 
04:41:33.168166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:24:36.802 [2024-11-27 04:41:33.168347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:36.802 [2024-11-27 04:41:33.168477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:36.803 [2024-11-27 04:41:33.168740] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:36.803 [2024-11-27 04:41:33.168748] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a28c5f8-1fbc-4d83-88bc-ea3ff92fe155 00:24:36.803 [2024-11-27 04:41:33.168756] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:36.803 [2024-11-27 04:41:33.168763] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 11456 00:24:36.803 [2024-11-27 04:41:33.168771] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 10496 00:24:36.803 [2024-11-27 04:41:33.168778] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0915 00:24:36.803 [2024-11-27 04:41:33.168789] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:36.803 [2024-11-27 04:41:33.168802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:36.803 [2024-11-27 04:41:33.168809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:36.803 [2024-11-27 04:41:33.168815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:36.803 [2024-11-27 04:41:33.168821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:36.803 [2024-11-27 04:41:33.168828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.803 [2024-11-27 04:41:33.168835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:36.803 [2024-11-27 04:41:33.168844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms 00:24:36.803 [2024-11-27 04:41:33.168851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.803 [2024-11-27 04:41:33.181193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.803 [2024-11-27 04:41:33.181222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:36.803 [2024-11-27 04:41:33.181235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.327 ms 00:24:36.803 [2024-11-27 04:41:33.181243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.803 [2024-11-27 04:41:33.181574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.803 [2024-11-27 04:41:33.181588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:36.803 [2024-11-27 04:41:33.181596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:24:36.803 [2024-11-27 04:41:33.181603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.803 [2024-11-27 04:41:33.213583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.803 [2024-11-27 04:41:33.213627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:36.803 [2024-11-27 04:41:33.213639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.803 [2024-11-27 04:41:33.213647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.803 [2024-11-27 04:41:33.213705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.803 [2024-11-27 04:41:33.213714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:36.803 [2024-11-27 04:41:33.213741] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.803 [2024-11-27 04:41:33.213749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.803 [2024-11-27 04:41:33.213806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.803 [2024-11-27 04:41:33.213815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:36.803 [2024-11-27 04:41:33.213827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.803 [2024-11-27 04:41:33.213834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.803 [2024-11-27 04:41:33.213849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.803 [2024-11-27 04:41:33.213857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:36.803 [2024-11-27 04:41:33.213864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.803 [2024-11-27 04:41:33.213872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.803 [2024-11-27 04:41:33.288898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.803 [2024-11-27 04:41:33.288959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:36.803 [2024-11-27 04:41:33.288971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.803 [2024-11-27 04:41:33.288979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.803 [2024-11-27 04:41:33.350437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.803 [2024-11-27 04:41:33.350485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:36.803 [2024-11-27 04:41:33.350497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.803 [2024-11-27 04:41:33.350505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.803 [2024-11-27 04:41:33.350571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.803 [2024-11-27 04:41:33.350580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:36.803 [2024-11-27 04:41:33.350588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.803 [2024-11-27 04:41:33.350598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.803 [2024-11-27 04:41:33.350631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.803 [2024-11-27 04:41:33.350639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:36.803 [2024-11-27 04:41:33.350647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.803 [2024-11-27 04:41:33.350654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.803 [2024-11-27 04:41:33.350757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.803 [2024-11-27 04:41:33.350768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:36.803 [2024-11-27 04:41:33.350786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.803 [2024-11-27 04:41:33.350794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.803 [2024-11-27 04:41:33.350829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.803 [2024-11-27 04:41:33.350838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:24:36.803 [2024-11-27 04:41:33.350846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.803 [2024-11-27 04:41:33.350853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.803 [2024-11-27 04:41:33.350886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.803 [2024-11-27 04:41:33.350895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:36.803 [2024-11-27 04:41:33.350902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.803 [2024-11-27 04:41:33.350909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.803 [2024-11-27 04:41:33.350949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.803 [2024-11-27 04:41:33.350959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:36.803 [2024-11-27 04:41:33.350966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.803 [2024-11-27 04:41:33.350973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.804 [2024-11-27 04:41:33.351082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 379.334 ms, result 0 00:24:37.737 00:24:37.737 00:24:37.737 04:41:34 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:39.636 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:39.636 04:41:36 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:39.636 04:41:36 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:39.636 04:41:36 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:39.893 04:41:36 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:39.893 04:41:36 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:39.893 Process with pid 76968 is not found 00:24:39.893 Remove shared memory files 00:24:39.893 04:41:36 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76968 00:24:39.893 04:41:36 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 76968 ']' 00:24:39.893 04:41:36 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 76968 00:24:39.893 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76968) - No such process 00:24:39.893 04:41:36 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 76968 is not found' 00:24:39.893 04:41:36 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:39.893 04:41:36 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:39.893 04:41:36 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:39.893 04:41:36 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:39.893 04:41:36 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:39.893 04:41:36 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:39.893 04:41:36 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:39.893 ************************************ 00:24:39.893 END TEST ftl_restore 00:24:39.893 ************************************ 00:24:39.893 00:24:39.893 real 2m14.632s 00:24:39.893 user 2m3.067s 00:24:39.893 sys 0m11.966s 00:24:39.893 04:41:36 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:24:39.894 04:41:36 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:39.894 04:41:36 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:39.894 04:41:36 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:39.894 04:41:36 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:39.894 04:41:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:39.894 ************************************ 00:24:39.894 START TEST ftl_dirty_shutdown 00:24:39.894 ************************************ 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:39.894 * Looking for test storage... 00:24:39.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:39.894 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:40.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.152 --rc genhtml_branch_coverage=1 00:24:40.152 --rc genhtml_function_coverage=1 00:24:40.152 --rc genhtml_legend=1 00:24:40.152 --rc geninfo_all_blocks=1 00:24:40.152 --rc geninfo_unexecuted_blocks=1 00:24:40.152 00:24:40.152 ' 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:40.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.152 --rc genhtml_branch_coverage=1 00:24:40.152 --rc genhtml_function_coverage=1 00:24:40.152 --rc genhtml_legend=1 00:24:40.152 --rc geninfo_all_blocks=1 00:24:40.152 --rc geninfo_unexecuted_blocks=1 00:24:40.152 00:24:40.152 ' 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:40.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.152 --rc genhtml_branch_coverage=1 00:24:40.152 --rc genhtml_function_coverage=1 00:24:40.152 --rc genhtml_legend=1 00:24:40.152 --rc geninfo_all_blocks=1 00:24:40.152 --rc geninfo_unexecuted_blocks=1 00:24:40.152 00:24:40.152 ' 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:40.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.152 --rc genhtml_branch_coverage=1 00:24:40.152 --rc genhtml_function_coverage=1 00:24:40.152 --rc genhtml_legend=1 00:24:40.152 --rc geninfo_all_blocks=1 00:24:40.152 --rc geninfo_unexecuted_blocks=1 00:24:40.152 00:24:40.152 ' 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:40.152 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:40.153 04:41:36 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78409 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78409 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 78409 ']' 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.153 04:41:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:40.153 [2024-11-27 04:41:36.572896] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:24:40.153 [2024-11-27 04:41:36.573137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78409 ] 00:24:40.153 [2024-11-27 04:41:36.725115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.426 [2024-11-27 04:41:36.824334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.991 04:41:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.991 04:41:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:40.991 04:41:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:40.991 04:41:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:40.991 04:41:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:40.991 04:41:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:40.991 04:41:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:40.991 04:41:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:41.249 04:41:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:41.249 04:41:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:41.249 04:41:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:41.249 04:41:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:41.249 04:41:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:41.249 04:41:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:41.249 04:41:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:41.249 04:41:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:41.508 04:41:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:41.508 { 00:24:41.508 "name": "nvme0n1", 00:24:41.508 "aliases": [ 00:24:41.508 "30c4aabf-c2a9-40eb-91de-ef8b60ff947f" 00:24:41.508 ], 00:24:41.508 "product_name": "NVMe disk", 00:24:41.508 "block_size": 4096, 00:24:41.508 "num_blocks": 1310720, 00:24:41.508 "uuid": "30c4aabf-c2a9-40eb-91de-ef8b60ff947f", 00:24:41.508 "numa_id": -1, 00:24:41.508 "assigned_rate_limits": { 00:24:41.508 "rw_ios_per_sec": 0, 00:24:41.508 "rw_mbytes_per_sec": 0, 00:24:41.508 "r_mbytes_per_sec": 0, 00:24:41.508 "w_mbytes_per_sec": 0 00:24:41.508 }, 00:24:41.508 "claimed": true, 00:24:41.508 "claim_type": "read_many_write_one", 00:24:41.508 "zoned": false, 00:24:41.508 "supported_io_types": { 00:24:41.508 "read": true, 00:24:41.508 "write": true, 00:24:41.508 "unmap": true, 00:24:41.508 "flush": true, 00:24:41.508 "reset": true, 00:24:41.508 "nvme_admin": true, 00:24:41.508 "nvme_io": true, 00:24:41.508 "nvme_io_md": false, 00:24:41.508 "write_zeroes": true, 00:24:41.508 "zcopy": false, 00:24:41.508 "get_zone_info": false, 00:24:41.508 "zone_management": false, 00:24:41.508 "zone_append": false, 00:24:41.508 "compare": true, 00:24:41.508 "compare_and_write": false, 00:24:41.508 "abort": true, 00:24:41.508 "seek_hole": false, 00:24:41.508 "seek_data": false, 00:24:41.508 
"copy": true, 00:24:41.508 "nvme_iov_md": false 00:24:41.508 }, 00:24:41.508 "driver_specific": { 00:24:41.508 "nvme": [ 00:24:41.508 { 00:24:41.508 "pci_address": "0000:00:11.0", 00:24:41.508 "trid": { 00:24:41.508 "trtype": "PCIe", 00:24:41.508 "traddr": "0000:00:11.0" 00:24:41.508 }, 00:24:41.508 "ctrlr_data": { 00:24:41.508 "cntlid": 0, 00:24:41.508 "vendor_id": "0x1b36", 00:24:41.508 "model_number": "QEMU NVMe Ctrl", 00:24:41.508 "serial_number": "12341", 00:24:41.508 "firmware_revision": "8.0.0", 00:24:41.508 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:41.508 "oacs": { 00:24:41.508 "security": 0, 00:24:41.508 "format": 1, 00:24:41.508 "firmware": 0, 00:24:41.508 "ns_manage": 1 00:24:41.508 }, 00:24:41.508 "multi_ctrlr": false, 00:24:41.508 "ana_reporting": false 00:24:41.508 }, 00:24:41.508 "vs": { 00:24:41.508 "nvme_version": "1.4" 00:24:41.508 }, 00:24:41.508 "ns_data": { 00:24:41.508 "id": 1, 00:24:41.508 "can_share": false 00:24:41.508 } 00:24:41.508 } 00:24:41.508 ], 00:24:41.508 "mp_policy": "active_passive" 00:24:41.508 } 00:24:41.508 } 00:24:41.508 ]' 00:24:41.508 04:41:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:41.508 04:41:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:41.508 04:41:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:41.508 04:41:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:41.508 04:41:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:41.508 04:41:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:41.508 04:41:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:41.508 04:41:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:41.508 04:41:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:41.508 04:41:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:41.508 04:41:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:41.766 04:41:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=9983f70d-3ef3-4af9-b230-95f9a1cd1683 00:24:41.766 04:41:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:41.766 04:41:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9983f70d-3ef3-4af9-b230-95f9a1cd1683 00:24:42.023 04:41:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:42.281 04:41:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=917fb7f0-02d0-4a2a-a526-73e5d8d4696a 00:24:42.281 04:41:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 917fb7f0-02d0-4a2a-a526-73e5d8d4696a 00:24:42.281 04:41:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=e4928e96-45f3-4eb0-9598-8e586dc968c7 00:24:42.281 04:41:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:42.281 04:41:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e4928e96-45f3-4eb0-9598-8e586dc968c7 00:24:42.281 04:41:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:42.281 04:41:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:42.281 04:41:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=e4928e96-45f3-4eb0-9598-8e586dc968c7 00:24:42.281 04:41:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:42.281 04:41:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size e4928e96-45f3-4eb0-9598-8e586dc968c7 00:24:42.281 04:41:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=e4928e96-45f3-4eb0-9598-8e586dc968c7 00:24:42.281 04:41:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:42.281 04:41:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:42.281 04:41:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:42.281 04:41:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e4928e96-45f3-4eb0-9598-8e586dc968c7 00:24:42.539 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:42.539 { 00:24:42.539 "name": "e4928e96-45f3-4eb0-9598-8e586dc968c7", 00:24:42.539 "aliases": [ 00:24:42.539 "lvs/nvme0n1p0" 00:24:42.539 ], 00:24:42.539 "product_name": "Logical Volume", 00:24:42.539 "block_size": 4096, 00:24:42.539 "num_blocks": 26476544, 00:24:42.539 "uuid": "e4928e96-45f3-4eb0-9598-8e586dc968c7", 00:24:42.539 "assigned_rate_limits": { 00:24:42.539 "rw_ios_per_sec": 0, 00:24:42.539 "rw_mbytes_per_sec": 0, 00:24:42.539 "r_mbytes_per_sec": 0, 00:24:42.539 "w_mbytes_per_sec": 0 00:24:42.539 }, 00:24:42.539 "claimed": false, 00:24:42.539 "zoned": false, 00:24:42.539 "supported_io_types": { 00:24:42.539 "read": true, 00:24:42.539 "write": true, 00:24:42.539 "unmap": true, 00:24:42.539 "flush": false, 00:24:42.539 "reset": true, 00:24:42.539 "nvme_admin": false, 00:24:42.539 "nvme_io": false, 00:24:42.539 "nvme_io_md": false, 00:24:42.539 "write_zeroes": true, 00:24:42.539 "zcopy": false, 00:24:42.539 "get_zone_info": false, 00:24:42.539 "zone_management": false, 00:24:42.539 "zone_append": false, 00:24:42.539 "compare": false, 00:24:42.539 "compare_and_write": false, 00:24:42.539 "abort": false, 00:24:42.539 "seek_hole": true, 00:24:42.539 "seek_data": true, 00:24:42.539 "copy": false, 00:24:42.539 "nvme_iov_md": false 00:24:42.539 }, 00:24:42.539 "driver_specific": { 00:24:42.539 "lvol": { 00:24:42.539 "lvol_store_uuid": "917fb7f0-02d0-4a2a-a526-73e5d8d4696a", 00:24:42.539 "base_bdev": "nvme0n1", 00:24:42.539 "thin_provision": true, 00:24:42.539 "num_allocated_clusters": 0, 00:24:42.539 "snapshot": false, 00:24:42.539 "clone": false, 00:24:42.539 "esnap_clone": false 00:24:42.539 } 00:24:42.539 } 00:24:42.539 } 00:24:42.539 ]' 00:24:42.539 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:42.539 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:42.539 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:42.539 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:42.539 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:42.539 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:42.539 04:41:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:42.539 04:41:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:42.539 04:41:39 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:42.800 04:41:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:42.800 04:41:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:42.800 04:41:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size e4928e96-45f3-4eb0-9598-8e586dc968c7 00:24:42.800 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=e4928e96-45f3-4eb0-9598-8e586dc968c7 00:24:42.800 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:42.800 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:42.800 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:42.800 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e4928e96-45f3-4eb0-9598-8e586dc968c7 00:24:43.057 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:43.057 { 00:24:43.057 "name": "e4928e96-45f3-4eb0-9598-8e586dc968c7", 00:24:43.057 "aliases": [ 00:24:43.057 "lvs/nvme0n1p0" 00:24:43.057 ], 00:24:43.057 "product_name": "Logical Volume", 00:24:43.057 "block_size": 4096, 00:24:43.057 "num_blocks": 26476544, 00:24:43.058 "uuid": "e4928e96-45f3-4eb0-9598-8e586dc968c7", 00:24:43.058 "assigned_rate_limits": { 00:24:43.058 "rw_ios_per_sec": 0, 00:24:43.058 "rw_mbytes_per_sec": 0, 00:24:43.058 "r_mbytes_per_sec": 0, 00:24:43.058 "w_mbytes_per_sec": 0 00:24:43.058 }, 00:24:43.058 "claimed": false, 00:24:43.058 "zoned": false, 00:24:43.058 "supported_io_types": { 00:24:43.058 "read": true, 00:24:43.058 "write": true, 00:24:43.058 "unmap": true, 00:24:43.058 "flush": false, 00:24:43.058 "reset": true, 00:24:43.058 "nvme_admin": false, 00:24:43.058 "nvme_io": false, 00:24:43.058 "nvme_io_md": false, 00:24:43.058 "write_zeroes": true, 00:24:43.058 "zcopy": false, 00:24:43.058 "get_zone_info": false, 00:24:43.058 "zone_management": false, 00:24:43.058 "zone_append": false, 00:24:43.058 "compare": false, 00:24:43.058 "compare_and_write": false, 00:24:43.058 "abort": false, 00:24:43.058 "seek_hole": true, 00:24:43.058 "seek_data": true, 00:24:43.058 "copy": false, 00:24:43.058 "nvme_iov_md": false 00:24:43.058 }, 00:24:43.058 "driver_specific": { 00:24:43.058 "lvol": { 00:24:43.058 "lvol_store_uuid": "917fb7f0-02d0-4a2a-a526-73e5d8d4696a", 00:24:43.058 "base_bdev": "nvme0n1", 00:24:43.058 "thin_provision": true, 00:24:43.058 "num_allocated_clusters": 0, 00:24:43.058 "snapshot": false, 00:24:43.058 "clone": false, 00:24:43.058 "esnap_clone": false 00:24:43.058 } 00:24:43.058 } 00:24:43.058 } 00:24:43.058 ]' 00:24:43.058 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:43.058 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:43.058 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:43.058 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:43.058 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:43.058 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:43.058 04:41:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:43.058 04:41:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:43.315 04:41:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:43.315 04:41:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size e4928e96-45f3-4eb0-9598-8e586dc968c7 00:24:43.315 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=e4928e96-45f3-4eb0-9598-8e586dc968c7 00:24:43.315 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:43.315 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:43.315 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:43.315 04:41:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e4928e96-45f3-4eb0-9598-8e586dc968c7 00:24:43.573 04:41:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:43.573 { 00:24:43.573 "name": "e4928e96-45f3-4eb0-9598-8e586dc968c7", 00:24:43.573 "aliases": [ 00:24:43.573 "lvs/nvme0n1p0" 00:24:43.573 ], 00:24:43.573 "product_name": "Logical Volume", 00:24:43.573 "block_size": 4096, 00:24:43.573 "num_blocks": 26476544, 00:24:43.573 "uuid": "e4928e96-45f3-4eb0-9598-8e586dc968c7", 00:24:43.573 "assigned_rate_limits": { 00:24:43.573 "rw_ios_per_sec": 0, 00:24:43.573 "rw_mbytes_per_sec": 0, 00:24:43.573 "r_mbytes_per_sec": 0, 00:24:43.573 "w_mbytes_per_sec": 0 00:24:43.573 }, 00:24:43.573 "claimed": false, 00:24:43.573 "zoned": false, 00:24:43.573 "supported_io_types": { 00:24:43.573 "read": true, 00:24:43.573 "write": true, 00:24:43.573 "unmap": true, 00:24:43.573 "flush": false, 00:24:43.573 "reset": true, 00:24:43.573 "nvme_admin": false, 00:24:43.573 "nvme_io": false, 00:24:43.573 "nvme_io_md": false, 00:24:43.573 "write_zeroes": true, 00:24:43.573 "zcopy": false, 00:24:43.573 "get_zone_info": false, 00:24:43.573 "zone_management": false, 00:24:43.573 "zone_append": false, 00:24:43.573 "compare": false, 00:24:43.573 "compare_and_write": false, 00:24:43.573 "abort": false, 00:24:43.573 "seek_hole": true, 00:24:43.573 "seek_data": true, 00:24:43.573 "copy": false, 00:24:43.573 "nvme_iov_md": false 00:24:43.573 }, 00:24:43.573 "driver_specific": { 00:24:43.573 "lvol": { 00:24:43.573 "lvol_store_uuid": "917fb7f0-02d0-4a2a-a526-73e5d8d4696a", 00:24:43.573 "base_bdev": "nvme0n1", 00:24:43.573 "thin_provision": true, 00:24:43.573 "num_allocated_clusters": 0, 00:24:43.573 "snapshot": false, 00:24:43.573 "clone": false, 00:24:43.573 "esnap_clone": false 00:24:43.573 } 00:24:43.573 } 00:24:43.573 } 00:24:43.573 ]' 00:24:43.573 04:41:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:43.573 04:41:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:43.573 04:41:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:43.573 04:41:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:43.573 04:41:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:43.573 04:41:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:43.573 04:41:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:43.573 04:41:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e4928e96-45f3-4eb0-9598-8e586dc968c7 
--l2p_dram_limit 10' 00:24:43.573 04:41:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:43.573 04:41:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:43.573 04:41:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:43.573 04:41:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e4928e96-45f3-4eb0-9598-8e586dc968c7 --l2p_dram_limit 10 -c nvc0n1p0 00:24:43.832 [2024-11-27 04:41:40.280827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.832 [2024-11-27 04:41:40.280984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:43.832 [2024-11-27 04:41:40.281004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:43.832 [2024-11-27 04:41:40.281011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.832 [2024-11-27 04:41:40.281068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.832 [2024-11-27 04:41:40.281076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:43.832 [2024-11-27 04:41:40.281085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:43.832 [2024-11-27 04:41:40.281091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.832 [2024-11-27 04:41:40.281108] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:43.832 [2024-11-27 04:41:40.281741] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:43.832 [2024-11-27 04:41:40.281757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.832 [2024-11-27 04:41:40.281764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:43.832 [2024-11-27 04:41:40.281773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:24:43.832 [2024-11-27 04:41:40.281779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.832 [2024-11-27 04:41:40.281834] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b2275ce7-e06a-4d66-8d6c-890a04538f03 00:24:43.832 [2024-11-27 04:41:40.282788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.832 [2024-11-27 04:41:40.282817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:43.832 [2024-11-27 04:41:40.282825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:43.832 [2024-11-27 04:41:40.282836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.832 [2024-11-27 04:41:40.287664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.832 [2024-11-27 04:41:40.287694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:43.832 [2024-11-27 04:41:40.287702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.796 ms 00:24:43.832 [2024-11-27 04:41:40.287710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.832 [2024-11-27 04:41:40.287784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.832 [2024-11-27 04:41:40.287794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:43.832 [2024-11-27 04:41:40.287801] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:43.832 [2024-11-27 04:41:40.287812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.832 [2024-11-27 04:41:40.287852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.832 [2024-11-27 04:41:40.287861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:43.832 [2024-11-27 04:41:40.287869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:43.832 [2024-11-27 04:41:40.287877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.832 [2024-11-27 04:41:40.287894] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:43.832 [2024-11-27 04:41:40.290896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.832 [2024-11-27 04:41:40.290975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:43.832 [2024-11-27 04:41:40.291048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.005 ms 00:24:43.832 [2024-11-27 04:41:40.291066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.832 [2024-11-27 04:41:40.291105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.833 [2024-11-27 04:41:40.291122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:43.833 [2024-11-27 04:41:40.291138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:43.833 [2024-11-27 04:41:40.291153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.833 [2024-11-27 04:41:40.291183] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:43.833 [2024-11-27 04:41:40.291368] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:43.833 [2024-11-27 04:41:40.291400] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:43.833 [2024-11-27 04:41:40.291426] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:43.833 [2024-11-27 04:41:40.291453] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:43.833 [2024-11-27 04:41:40.291476] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:43.833 [2024-11-27 04:41:40.291535] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:43.833 [2024-11-27 04:41:40.291555] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:43.833 [2024-11-27 04:41:40.291572] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:43.833 [2024-11-27 04:41:40.291586] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:43.833 [2024-11-27 04:41:40.291603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.833 [2024-11-27 04:41:40.291623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:43.833 [2024-11-27 04:41:40.291639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:24:43.833 [2024-11-27 04:41:40.291697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.833 [2024-11-27 04:41:40.291788] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.833 [2024-11-27 04:41:40.291806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:43.833 [2024-11-27 04:41:40.291854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:43.833 [2024-11-27 04:41:40.291871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.833 [2024-11-27 04:41:40.291974] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:43.833 [2024-11-27 04:41:40.292034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:43.833 [2024-11-27 04:41:40.292052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.833 [2024-11-27 04:41:40.292120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.833 [2024-11-27 04:41:40.292141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:43.833 [2024-11-27 04:41:40.292156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:43.833 [2024-11-27 04:41:40.292172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:43.833 [2024-11-27 04:41:40.292186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:43.833 [2024-11-27 04:41:40.292203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:43.833 [2024-11-27 04:41:40.292217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.833 [2024-11-27 04:41:40.292232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:43.833 [2024-11-27 04:41:40.292276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:43.833 [2024-11-27 04:41:40.292295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.833 [2024-11-27 04:41:40.292309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:43.833 [2024-11-27 04:41:40.292325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:43.833 [2024-11-27 04:41:40.292339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.833 [2024-11-27 04:41:40.292356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:43.833 [2024-11-27 04:41:40.292371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:43.833 [2024-11-27 04:41:40.292388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.833 [2024-11-27 04:41:40.292424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:43.833 [2024-11-27 04:41:40.292466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:43.833 [2024-11-27 04:41:40.292520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.833 [2024-11-27 04:41:40.292539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:43.833 [2024-11-27 04:41:40.292554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:43.833 [2024-11-27 04:41:40.292569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.833 [2024-11-27 04:41:40.292607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:43.833 [2024-11-27 04:41:40.292626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:43.833 [2024-11-27 04:41:40.292641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.833 [2024-11-27 04:41:40.292656] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:43.833 [2024-11-27 04:41:40.292671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:43.833 [2024-11-27 04:41:40.292716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.833 [2024-11-27 04:41:40.292745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:43.833 [2024-11-27 04:41:40.292764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:43.833 [2024-11-27 04:41:40.292859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.833 [2024-11-27 04:41:40.292869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:43.833 [2024-11-27 04:41:40.292875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:43.833 [2024-11-27 04:41:40.292882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.833 [2024-11-27 04:41:40.292887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:43.833 [2024-11-27 04:41:40.292894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:43.833 [2024-11-27 04:41:40.292899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.833 [2024-11-27 04:41:40.292906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:43.833 [2024-11-27 04:41:40.292911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:43.833 [2024-11-27 04:41:40.292917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.833 [2024-11-27 04:41:40.292922] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:43.833 [2024-11-27 04:41:40.292936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:43.833 [2024-11-27 04:41:40.292942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.833 [2024-11-27 04:41:40.292950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.833 [2024-11-27 04:41:40.292956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:43.833 [2024-11-27 04:41:40.292964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:43.833 [2024-11-27 04:41:40.292970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:43.833 [2024-11-27 04:41:40.292977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:43.833 [2024-11-27 04:41:40.292982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:43.833 [2024-11-27 04:41:40.292988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:43.833 [2024-11-27 04:41:40.292996] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:43.833 [2024-11-27 04:41:40.293007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.833 [2024-11-27 04:41:40.293014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:43.833 [2024-11-27 04:41:40.293021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:43.833 [2024-11-27 04:41:40.293027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:43.833 [2024-11-27 04:41:40.293034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:43.833 [2024-11-27 04:41:40.293039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:43.833 [2024-11-27 04:41:40.293046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:43.833 [2024-11-27 04:41:40.293052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:43.833 [2024-11-27 04:41:40.293059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:43.833 [2024-11-27 04:41:40.293065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:43.833 [2024-11-27 04:41:40.293074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:43.833 [2024-11-27 04:41:40.293080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:43.833 [2024-11-27 04:41:40.293087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:43.833 [2024-11-27 04:41:40.293093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:43.833 [2024-11-27 04:41:40.293101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:43.833 [2024-11-27 04:41:40.293107] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:43.833 [2024-11-27 04:41:40.293114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.833 [2024-11-27 04:41:40.293120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:43.833 [2024-11-27 04:41:40.293128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:43.834 [2024-11-27 04:41:40.293133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:43.834 [2024-11-27 04:41:40.293140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:43.834 [2024-11-27 04:41:40.293147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.834 [2024-11-27 04:41:40.293154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:43.834 [2024-11-27 04:41:40.293160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms 00:24:43.834 [2024-11-27 04:41:40.293167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.834 [2024-11-27 04:41:40.293202] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:43.834 [2024-11-27 04:41:40.293212] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:46.370 [2024-11-27 04:41:42.349269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.370 [2024-11-27 04:41:42.349488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:46.370 [2024-11-27 04:41:42.349568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2056.059 ms 00:24:46.370 [2024-11-27 04:41:42.349632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.370 [2024-11-27 04:41:42.375959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.370 [2024-11-27 04:41:42.376148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:46.370 [2024-11-27 04:41:42.376214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.094 ms 00:24:46.370 [2024-11-27 04:41:42.376240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.370 [2024-11-27 04:41:42.376421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.370 [2024-11-27 04:41:42.376455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:46.370 [2024-11-27 04:41:42.376517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:46.370 [2024-11-27 04:41:42.376547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.370 [2024-11-27 04:41:42.407749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.370 [2024-11-27 04:41:42.407895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.370 [2024-11-27 04:41:42.407952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.134 ms 00:24:46.370 [2024-11-27 04:41:42.407967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.408006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.408017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.371 [2024-11-27 04:41:42.408026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:46.371 [2024-11-27 04:41:42.408041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.408382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.408400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.371 [2024-11-27 04:41:42.408408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:24:46.371 [2024-11-27 04:41:42.408417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.408523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.408536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.371 [2024-11-27 04:41:42.408544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:24:46.371 [2024-11-27 04:41:42.408555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.422438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.422472] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:46.371 [2024-11-27 04:41:42.422482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.862 ms 00:24:46.371 [2024-11-27 04:41:42.422491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.445575] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:46.371 [2024-11-27 04:41:42.448972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.449013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:46.371 [2024-11-27 04:41:42.449033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.406 ms 00:24:46.371 [2024-11-27 04:41:42.449046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.505210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.505255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:46.371 [2024-11-27 04:41:42.505269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.113 ms 00:24:46.371 [2024-11-27 04:41:42.505278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.505459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.505470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:46.371 [2024-11-27 04:41:42.505483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:24:46.371 [2024-11-27 04:41:42.505491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.528614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.528762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:46.371 [2024-11-27 04:41:42.528783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.079 ms 00:24:46.371 [2024-11-27 04:41:42.528791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.550960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.550992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:46.371 [2024-11-27 04:41:42.551005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.131 ms 00:24:46.371 [2024-11-27 04:41:42.551013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.551559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.551580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:46.371 [2024-11-27 04:41:42.551593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:24:46.371 [2024-11-27 04:41:42.551600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.616539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.616681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:46.371 [2024-11-27 04:41:42.616704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.904 ms 00:24:46.371 [2024-11-27 04:41:42.616713] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.640950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.640985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:46.371 [2024-11-27 04:41:42.640999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.149 ms 00:24:46.371 [2024-11-27 04:41:42.641008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.663845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.663878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:46.371 [2024-11-27 04:41:42.663891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.797 ms 00:24:46.371 [2024-11-27 04:41:42.663898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.686683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.686717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:46.371 [2024-11-27 04:41:42.686744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.747 ms 00:24:46.371 [2024-11-27 04:41:42.686752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.686790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.686800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:46.371 [2024-11-27 04:41:42.686812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:46.371 [2024-11-27 04:41:42.686819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.686907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.371 [2024-11-27 04:41:42.686919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:46.371 [2024-11-27 04:41:42.686929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:46.371 [2024-11-27 04:41:42.686936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.371 [2024-11-27 04:41:42.687752] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2406.511 ms, result 0 00:24:46.371 { 00:24:46.371 "name": "ftl0", 00:24:46.371 "uuid": "b2275ce7-e06a-4d66-8d6c-890a04538f03" 00:24:46.371 } 00:24:46.371 04:41:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:46.371 04:41:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:46.371 04:41:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:46.371 04:41:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:46.371 04:41:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:46.630 /dev/nbd0 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:46.630 1+0 records in 00:24:46.630 1+0 records out 00:24:46.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310048 s, 13.2 MB/s 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:24:46.630 04:41:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:46.630 [2024-11-27 04:41:43.208137] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:24:46.630 [2024-11-27 04:41:43.208264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78540 ] 00:24:46.889 [2024-11-27 04:41:43.368320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.889 [2024-11-27 04:41:43.466907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.265  [2024-11-27T04:41:45.785Z] Copying: 196/1024 [MB] (196 MBps) [2024-11-27T04:41:46.718Z] Copying: 393/1024 [MB] (197 MBps) [2024-11-27T04:41:48.091Z] Copying: 590/1024 [MB] (196 MBps) [2024-11-27T04:41:48.657Z] Copying: 830/1024 [MB] (240 MBps) [2024-11-27T04:41:49.222Z] Copying: 1024/1024 [MB] (average 214 MBps) 00:24:52.635 00:24:52.635 04:41:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:55.156 04:41:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:55.156 [2024-11-27 04:41:51.194075] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:24:55.156 [2024-11-27 04:41:51.194339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78621 ] 00:24:55.156 [2024-11-27 04:41:51.350351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.156 [2024-11-27 04:41:51.432994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.089  [2024-11-27T04:41:54.050Z] Copying: 31/1024 [MB] (31 MBps) [2024-11-27T04:41:54.616Z] Copying: 61/1024 [MB] (29 MBps) [2024-11-27T04:41:56.010Z] Copying: 90/1024 [MB] (28 MBps) [2024-11-27T04:41:56.620Z] Copying: 121/1024 [MB] (30 MBps) [2024-11-27T04:41:57.994Z] Copying: 150/1024 [MB] (29 MBps) [2024-11-27T04:41:58.927Z] Copying: 179/1024 [MB] (28 MBps) [2024-11-27T04:41:59.862Z] Copying: 208/1024 [MB] (29 MBps) [2024-11-27T04:42:00.811Z] Copying: 236/1024 [MB] (27 MBps) [2024-11-27T04:42:01.744Z] Copying: 266/1024 [MB] (30 MBps) [2024-11-27T04:42:02.677Z] Copying: 301/1024 [MB] (35 MBps) [2024-11-27T04:42:04.049Z] Copying: 331/1024 [MB] (30 MBps) [2024-11-27T04:42:04.616Z] Copying: 362/1024 [MB] (30 MBps) [2024-11-27T04:42:05.985Z] Copying: 396/1024 [MB] (34 MBps) [2024-11-27T04:42:06.919Z] Copying: 426/1024 [MB] (29 MBps) [2024-11-27T04:42:07.853Z] Copying: 455/1024 [MB] (29 MBps) [2024-11-27T04:42:08.786Z] Copying: 487/1024 [MB] (31 MBps) [2024-11-27T04:42:09.719Z] Copying: 522/1024 [MB] (34 MBps) [2024-11-27T04:42:10.653Z] Copying: 557/1024 [MB] (35 MBps) [2024-11-27T04:42:12.040Z] Copying: 591/1024 [MB] (34 MBps) [2024-11-27T04:42:12.976Z] Copying: 627/1024 [MB] (35 MBps) [2024-11-27T04:42:13.908Z] Copying: 661/1024 [MB] (34 MBps) [2024-11-27T04:42:14.840Z] Copying: 697/1024 [MB] (35 MBps) [2024-11-27T04:42:15.772Z] Copying: 730/1024 [MB] (32 MBps) [2024-11-27T04:42:16.705Z] Copying: 759/1024 [MB] (28 MBps) [2024-11-27T04:42:17.641Z] Copying: 788/1024 [MB] (28 MBps) [2024-11-27T04:42:19.013Z] Copying: 818/1024 [MB] (30 MBps) [2024-11-27T04:42:19.946Z] Copying: 848/1024 [MB] (29 MBps) [2024-11-27T04:42:20.878Z] Copying: 880/1024 [MB] (31 MBps) [2024-11-27T04:42:21.811Z] Copying: 910/1024 [MB] (29 MBps) [2024-11-27T04:42:22.744Z] Copying: 940/1024 [MB] (30 MBps) [2024-11-27T04:42:23.676Z] Copying: 970/1024 [MB] (29 MBps) [2024-11-27T04:42:24.608Z] Copying: 999/1024 [MB] (28 MBps) [2024-11-27T04:42:25.173Z] Copying: 1024/1024 [MB] (average 31 MBps) 00:25:28.586 00:25:28.586 04:42:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:28.586 04:42:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:28.843 04:42:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:28.843 [2024-11-27 04:42:25.426553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.843 [2024-11-27 04:42:25.426601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:28.843 [2024-11-27 04:42:25.426612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:28.843 [2024-11-27 04:42:25.426621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.843 [2024-11-27 04:42:25.426640] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:29.103 [2024-11-27 
04:42:25.428739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.103 [2024-11-27 04:42:25.428763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:29.103 [2024-11-27 04:42:25.428773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.083 ms 00:25:29.103 [2024-11-27 04:42:25.428779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.103 [2024-11-27 04:42:25.430488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.103 [2024-11-27 04:42:25.430514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:29.103 [2024-11-27 04:42:25.430524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:25:29.103 [2024-11-27 04:42:25.430530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.103 [2024-11-27 04:42:25.443344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.103 [2024-11-27 04:42:25.443370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:29.103 [2024-11-27 04:42:25.443380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.795 ms 00:25:29.103 [2024-11-27 04:42:25.443387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.103 [2024-11-27 04:42:25.448154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.103 [2024-11-27 04:42:25.448268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:29.103 [2024-11-27 04:42:25.448283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.740 ms 00:25:29.103 [2024-11-27 04:42:25.448290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.103 [2024-11-27 04:42:25.466278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.103 [2024-11-27 04:42:25.466306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:29.103 [2024-11-27 04:42:25.466316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.940 ms 00:25:29.103 [2024-11-27 04:42:25.466322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.103 [2024-11-27 04:42:25.478354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.103 [2024-11-27 04:42:25.478382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:29.103 [2024-11-27 04:42:25.478394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.000 ms 00:25:29.103 [2024-11-27 04:42:25.478402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.103 [2024-11-27 04:42:25.478525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.103 [2024-11-27 04:42:25.478534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:29.103 [2024-11-27 04:42:25.478542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:25:29.103 [2024-11-27 04:42:25.478548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.103 [2024-11-27 04:42:25.496276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.103 [2024-11-27 04:42:25.496302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:29.103 [2024-11-27 04:42:25.496311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.712 ms 00:25:29.103 [2024-11-27 04:42:25.496317] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:29.103 [2024-11-27 04:42:25.513754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.103 [2024-11-27 04:42:25.513918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:29.103 [2024-11-27 04:42:25.513934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.408 ms 00:25:29.103 [2024-11-27 04:42:25.513940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.103 [2024-11-27 04:42:25.531307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.103 [2024-11-27 04:42:25.531332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:29.103 [2024-11-27 04:42:25.531342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.312 ms 00:25:29.103 [2024-11-27 04:42:25.531347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.103 [2024-11-27 04:42:25.548328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.103 [2024-11-27 04:42:25.548353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:29.103 [2024-11-27 04:42:25.548362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.920 ms 00:25:29.103 [2024-11-27 04:42:25.548369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.103 [2024-11-27 04:42:25.548398] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:29.103 [2024-11-27 04:42:25.548411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:29.103 [2024-11-27 04:42:25.548420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:29.103 [2024-11-27 04:42:25.548427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:29.103 [2024-11-27 04:42:25.548434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:29.103 [2024-11-27 04:42:25.548441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:29.103 [2024-11-27 04:42:25.548448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:29.103 [2024-11-27 04:42:25.548454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:29.103 [2024-11-27 04:42:25.548463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:29.103 [2024-11-27 04:42:25.548469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:29.103 [2024-11-27 04:42:25.548476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:29.103 [2024-11-27 04:42:25.548482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:29.103 [2024-11-27 04:42:25.548489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:29.103 [2024-11-27 04:42:25.548495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:29.103 [2024-11-27 04:42:25.548502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:29.103 [2024-11-27 04:42:25.548508] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:29.103 [... Bands 16 through 88 repeat the identical entry "0 / 261120 wr_cnt: 0 state: free"; 73 duplicate lines condensed ...] 00:25:29.104 [2024-11-27 04:42:25.549032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120
wr_cnt: 0 state: free 00:25:29.104 [2024-11-27 04:42:25.549039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:29.104 [2024-11-27 04:42:25.549045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:29.104 [2024-11-27 04:42:25.549052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:29.104 [2024-11-27 04:42:25.549058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:29.104 [2024-11-27 04:42:25.549065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:29.104 [2024-11-27 04:42:25.549074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:29.104 [2024-11-27 04:42:25.549081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:29.104 [2024-11-27 04:42:25.549087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:29.104 [2024-11-27 04:42:25.549094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:29.104 [2024-11-27 04:42:25.549100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:29.104 [2024-11-27 04:42:25.549108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:29.104 [2024-11-27 04:42:25.549120] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:29.104 [2024-11-27 04:42:25.549128] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b2275ce7-e06a-4d66-8d6c-890a04538f03 00:25:29.104 [2024-11-27 04:42:25.549134] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:29.104 [2024-11-27 04:42:25.549142] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:29.104 [2024-11-27 04:42:25.549149] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:29.104 [2024-11-27 04:42:25.549156] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:29.104 [2024-11-27 04:42:25.549162] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:29.104 [2024-11-27 04:42:25.549169] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:29.104 [2024-11-27 04:42:25.549174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:29.104 [2024-11-27 04:42:25.549181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:29.104 [2024-11-27 04:42:25.549185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:29.104 [2024-11-27 04:42:25.549192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.104 [2024-11-27 04:42:25.549198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:29.104 [2024-11-27 04:42:25.549205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:25:29.104 [2024-11-27 04:42:25.549211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.104 [2024-11-27 04:42:25.558838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.104 [2024-11-27 04:42:25.558862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:29.104 [2024-11-27 
04:42:25.558872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.603 ms 00:25:29.104 [2024-11-27 04:42:25.558879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.104 [2024-11-27 04:42:25.559156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.104 [2024-11-27 04:42:25.559170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:29.104 [2024-11-27 04:42:25.559179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:25:29.104 [2024-11-27 04:42:25.559185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.104 [2024-11-27 04:42:25.592212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.104 [2024-11-27 04:42:25.592247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:29.104 [2024-11-27 04:42:25.592257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.104 [2024-11-27 04:42:25.592264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.104 [2024-11-27 04:42:25.592318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.104 [2024-11-27 04:42:25.592325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:29.104 [2024-11-27 04:42:25.592333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.104 [2024-11-27 04:42:25.592340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.104 [2024-11-27 04:42:25.592397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.104 [2024-11-27 04:42:25.592406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:29.104 [2024-11-27 04:42:25.592414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.104 [2024-11-27 04:42:25.592419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.104 [2024-11-27 04:42:25.592436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.104 [2024-11-27 04:42:25.592442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:29.104 [2024-11-27 04:42:25.592449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.104 [2024-11-27 04:42:25.592455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.104 [2024-11-27 04:42:25.652924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.104 [2024-11-27 04:42:25.652974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:29.104 [2024-11-27 04:42:25.652986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.104 [2024-11-27 04:42:25.652992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.363 [2024-11-27 04:42:25.702105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.363 [2024-11-27 04:42:25.702151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:29.363 [2024-11-27 04:42:25.702161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.363 [2024-11-27 04:42:25.702168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.363 [2024-11-27 04:42:25.702266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.363 [2024-11-27 04:42:25.702275] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:29.363 [2024-11-27 04:42:25.702285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.363 [2024-11-27 04:42:25.702291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.363 [2024-11-27 04:42:25.702331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.363 [2024-11-27 04:42:25.702338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:29.363 [2024-11-27 04:42:25.702346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.363 [2024-11-27 04:42:25.702352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.363 [2024-11-27 04:42:25.702424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.363 [2024-11-27 04:42:25.702432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:29.363 [2024-11-27 04:42:25.702440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.363 [2024-11-27 04:42:25.702448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.363 [2024-11-27 04:42:25.702477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.363 [2024-11-27 04:42:25.702484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:29.363 [2024-11-27 04:42:25.702491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.363 [2024-11-27 04:42:25.702501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.363 [2024-11-27 04:42:25.702531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.363 [2024-11-27 04:42:25.702537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:29.363 [2024-11-27 04:42:25.702545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.363 [2024-11-27 04:42:25.702553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.363 [2024-11-27 04:42:25.702589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.363 [2024-11-27 04:42:25.702596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:29.363 [2024-11-27 04:42:25.702604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.363 [2024-11-27 04:42:25.702610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.363 [2024-11-27 04:42:25.702713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.132 ms, result 0 00:25:29.363 true 00:25:29.363 04:42:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78409 00:25:29.363 04:42:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78409 00:25:29.363 04:42:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:29.363 [2024-11-27 04:42:25.795656] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
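At this point the test injects the crash: spdk_tgt (pid 78409) is killed with SIGKILL, so the "Set FTL clean state" step of an orderly shutdown never runs and the superblock keeps the dirty flag that was set during startup. The spdk_dd invocations that follow first generate a second test file and then reopen ftl0 from the saved JSON config, which is what forces the dirty-state recovery traced below (blobstore recovery, L2P and P2L restore). A condensed sketch of the injection, with $spdk_tgt_pid standing in for the literal 78409 and paths shortened; flags are copied from the trace, and dirty_shutdown.sh remains the authoritative script:

    kill -9 "$spdk_tgt_pid"                          # SIGKILL: no clean FTL shutdown path runs
    rm -f "/dev/shm/spdk_tgt_trace.pid$spdk_tgt_pid"
    spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
    # Reopening ftl0 now must detect the dirty superblock and recover:
    spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=config/ftl.json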
00:25:29.363 [2024-11-27 04:42:25.795793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78984 ] 00:25:29.621 [2024-11-27 04:42:25.951373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.621 [2024-11-27 04:42:26.033271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.994  [2024-11-27T04:42:28.514Z] Copying: 252/1024 [MB] (252 MBps) [2024-11-27T04:42:29.446Z] Copying: 505/1024 [MB] (253 MBps) [2024-11-27T04:42:30.380Z] Copying: 758/1024 [MB] (252 MBps) [2024-11-27T04:42:30.380Z] Copying: 1008/1024 [MB] (250 MBps) [2024-11-27T04:42:30.946Z] Copying: 1024/1024 [MB] (average 252 MBps) 00:25:34.359 00:25:34.359 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78409 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:34.359 04:42:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:34.359 [2024-11-27 04:42:30.923139] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:25:34.359 [2024-11-27 04:42:30.923260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79043 ] 00:25:34.618 [2024-11-27 04:42:31.078923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.618 [2024-11-27 04:42:31.160601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.876 [2024-11-27 04:42:31.373994] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:34.877 [2024-11-27 04:42:31.374052] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:34.877 [2024-11-27 04:42:31.436666] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:34.877 [2024-11-27 04:42:31.437051] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:34.877 [2024-11-27 04:42:31.437321] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:35.137 [2024-11-27 04:42:31.604508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.137 [2024-11-27 04:42:31.604687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:35.137 [2024-11-27 04:42:31.604703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:35.137 [2024-11-27 04:42:31.604713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.137 [2024-11-27 04:42:31.604777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.137 [2024-11-27 04:42:31.604787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.137 [2024-11-27 04:42:31.604795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:35.137 [2024-11-27 04:42:31.604801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.137 [2024-11-27 04:42:31.604817] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:35.137 
[2024-11-27 04:42:31.605351] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:35.137 [2024-11-27 04:42:31.605364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.137 [2024-11-27 04:42:31.605370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.137 [2024-11-27 04:42:31.605377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:25:35.137 [2024-11-27 04:42:31.605383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.137 [2024-11-27 04:42:31.606346] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:35.137 [2024-11-27 04:42:31.616075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.137 [2024-11-27 04:42:31.616104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:35.137 [2024-11-27 04:42:31.616113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.731 ms 00:25:35.137 [2024-11-27 04:42:31.616119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.137 [2024-11-27 04:42:31.616167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.137 [2024-11-27 04:42:31.616175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:35.137 [2024-11-27 04:42:31.616182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:35.137 [2024-11-27 04:42:31.616187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.137 [2024-11-27 04:42:31.620620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.137 [2024-11-27 04:42:31.620647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.137 [2024-11-27 04:42:31.620655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.388 ms 00:25:35.137 [2024-11-27 04:42:31.620661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.137 [2024-11-27 04:42:31.620716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.137 [2024-11-27 04:42:31.620736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.137 [2024-11-27 04:42:31.620743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:35.137 [2024-11-27 04:42:31.620749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.137 [2024-11-27 04:42:31.620786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.137 [2024-11-27 04:42:31.620794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:35.137 [2024-11-27 04:42:31.620800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:35.137 [2024-11-27 04:42:31.620806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.137 [2024-11-27 04:42:31.620824] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:35.137 [2024-11-27 04:42:31.623457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.137 [2024-11-27 04:42:31.623602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.137 [2024-11-27 04:42:31.623615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.637 ms 00:25:35.137 [2024-11-27 04:42:31.623622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:35.137 [2024-11-27 04:42:31.623646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.137 [2024-11-27 04:42:31.623653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:35.137 [2024-11-27 04:42:31.623660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:35.137 [2024-11-27 04:42:31.623666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.137 [2024-11-27 04:42:31.623686] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:35.137 [2024-11-27 04:42:31.623701] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:35.137 [2024-11-27 04:42:31.623739] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:35.137 [2024-11-27 04:42:31.623752] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:35.137 [2024-11-27 04:42:31.623834] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:35.137 [2024-11-27 04:42:31.623843] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:35.137 [2024-11-27 04:42:31.623852] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:35.137 [2024-11-27 04:42:31.623861] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:35.137 [2024-11-27 04:42:31.623869] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:35.137 [2024-11-27 04:42:31.623875] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:35.137 [2024-11-27 04:42:31.623881] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:35.137 [2024-11-27 04:42:31.623887] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:35.137 [2024-11-27 04:42:31.623893] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:35.137 [2024-11-27 04:42:31.623898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.137 [2024-11-27 04:42:31.623904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:35.137 [2024-11-27 04:42:31.623910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:25:35.137 [2024-11-27 04:42:31.623916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.137 [2024-11-27 04:42:31.623980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.137 [2024-11-27 04:42:31.623989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:35.137 [2024-11-27 04:42:31.623995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:35.138 [2024-11-27 04:42:31.624000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.138 [2024-11-27 04:42:31.624079] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:35.138 [2024-11-27 04:42:31.624087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:35.138 [2024-11-27 04:42:31.624093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.138 [2024-11-27 04:42:31.624099] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.138 [2024-11-27 04:42:31.624105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:35.138 [2024-11-27 04:42:31.624110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:35.138 [2024-11-27 04:42:31.624116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:35.138 [2024-11-27 04:42:31.624121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:35.138 [2024-11-27 04:42:31.624128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:35.138 [2024-11-27 04:42:31.624137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.138 [2024-11-27 04:42:31.624143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:35.138 [2024-11-27 04:42:31.624148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:35.138 [2024-11-27 04:42:31.624154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.138 [2024-11-27 04:42:31.624159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:35.138 [2024-11-27 04:42:31.624164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:35.138 [2024-11-27 04:42:31.624174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.138 [2024-11-27 04:42:31.624180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:35.138 [2024-11-27 04:42:31.624185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:35.138 [2024-11-27 04:42:31.624190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.138 [2024-11-27 04:42:31.624196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:35.138 [2024-11-27 04:42:31.624201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:35.138 [2024-11-27 04:42:31.624206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.138 [2024-11-27 04:42:31.624211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:35.138 [2024-11-27 04:42:31.624217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:35.138 [2024-11-27 04:42:31.624221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.138 [2024-11-27 04:42:31.624227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:35.138 [2024-11-27 04:42:31.624232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:35.138 [2024-11-27 04:42:31.624237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.138 [2024-11-27 04:42:31.624242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:35.138 [2024-11-27 04:42:31.624248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:35.138 [2024-11-27 04:42:31.624253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.138 [2024-11-27 04:42:31.624258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:35.138 [2024-11-27 04:42:31.624263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:35.138 [2024-11-27 04:42:31.624268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.138 [2024-11-27 04:42:31.624273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:35.138 
[2024-11-27 04:42:31.624278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:35.138 [2024-11-27 04:42:31.624283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.138 [2024-11-27 04:42:31.624288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:35.138 [2024-11-27 04:42:31.624293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:35.138 [2024-11-27 04:42:31.624298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.138 [2024-11-27 04:42:31.624303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:35.138 [2024-11-27 04:42:31.624308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:35.138 [2024-11-27 04:42:31.624313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.138 [2024-11-27 04:42:31.624318] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:35.138 [2024-11-27 04:42:31.624324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:35.138 [2024-11-27 04:42:31.624331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.138 [2024-11-27 04:42:31.624336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.138 [2024-11-27 04:42:31.624344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:35.138 [2024-11-27 04:42:31.624350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:35.138 [2024-11-27 04:42:31.624355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:35.138 [2024-11-27 04:42:31.624360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:35.138 [2024-11-27 04:42:31.624365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:35.138 [2024-11-27 04:42:31.624370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:35.138 [2024-11-27 04:42:31.624376] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:35.138 [2024-11-27 04:42:31.624383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.138 [2024-11-27 04:42:31.624389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:35.138 [2024-11-27 04:42:31.624395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:35.138 [2024-11-27 04:42:31.624400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:35.138 [2024-11-27 04:42:31.624406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:35.138 [2024-11-27 04:42:31.624411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:35.138 [2024-11-27 04:42:31.624417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:35.138 [2024-11-27 04:42:31.624422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:25:35.138 [2024-11-27 04:42:31.624427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:35.138 [2024-11-27 04:42:31.624433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:35.138 [2024-11-27 04:42:31.624438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:35.138 [2024-11-27 04:42:31.624444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:35.138 [2024-11-27 04:42:31.624449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:35.138 [2024-11-27 04:42:31.624454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:35.138 [2024-11-27 04:42:31.624460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:35.138 [2024-11-27 04:42:31.624465] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:35.138 [2024-11-27 04:42:31.624472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.138 [2024-11-27 04:42:31.624478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:35.138 [2024-11-27 04:42:31.624483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:35.138 [2024-11-27 04:42:31.624489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:35.138 [2024-11-27 04:42:31.624495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:35.138 [2024-11-27 04:42:31.624500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.138 [2024-11-27 04:42:31.624506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:35.138 [2024-11-27 04:42:31.624513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:25:35.138 [2024-11-27 04:42:31.624518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.138 [2024-11-27 04:42:31.645731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.138 [2024-11-27 04:42:31.645864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.138 [2024-11-27 04:42:31.645877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.166 ms 00:25:35.138 [2024-11-27 04:42:31.645884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.138 [2024-11-27 04:42:31.645958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.138 [2024-11-27 04:42:31.645965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:35.138 [2024-11-27 04:42:31.645971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:35.138 [2024-11-27 
04:42:31.645977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.138 [2024-11-27 04:42:31.687907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.138 [2024-11-27 04:42:31.687949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.138 [2024-11-27 04:42:31.687962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.880 ms 00:25:35.138 [2024-11-27 04:42:31.687968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.138 [2024-11-27 04:42:31.688013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.138 [2024-11-27 04:42:31.688021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.138 [2024-11-27 04:42:31.688028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:35.138 [2024-11-27 04:42:31.688034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.138 [2024-11-27 04:42:31.688363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.138 [2024-11-27 04:42:31.688377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.138 [2024-11-27 04:42:31.688383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:25:35.138 [2024-11-27 04:42:31.688392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.138 [2024-11-27 04:42:31.688489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.138 [2024-11-27 04:42:31.688496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.138 [2024-11-27 04:42:31.688502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:25:35.138 [2024-11-27 04:42:31.688508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.138 [2024-11-27 04:42:31.699153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.138 [2024-11-27 04:42:31.699275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.138 [2024-11-27 04:42:31.699288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.628 ms 00:25:35.138 [2024-11-27 04:42:31.699294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.138 [2024-11-27 04:42:31.709184] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:35.138 [2024-11-27 04:42:31.709216] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:35.138 [2024-11-27 04:42:31.709226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.138 [2024-11-27 04:42:31.709233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:35.138 [2024-11-27 04:42:31.709240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.831 ms 00:25:35.138 [2024-11-27 04:42:31.709246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.397 [2024-11-27 04:42:31.728219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.397 [2024-11-27 04:42:31.728254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:35.397 [2024-11-27 04:42:31.728263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.942 ms 00:25:35.397 [2024-11-27 04:42:31.728271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:35.397 [2024-11-27 04:42:31.737501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.397 [2024-11-27 04:42:31.737528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:35.397 [2024-11-27 04:42:31.737536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.193 ms 00:25:35.397 [2024-11-27 04:42:31.737541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.397 [2024-11-27 04:42:31.746360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.397 [2024-11-27 04:42:31.746385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:35.397 [2024-11-27 04:42:31.746393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.789 ms 00:25:35.397 [2024-11-27 04:42:31.746398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.397 [2024-11-27 04:42:31.746887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.397 [2024-11-27 04:42:31.746912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:35.397 [2024-11-27 04:42:31.746919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:25:35.397 [2024-11-27 04:42:31.746925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.397 [2024-11-27 04:42:31.790602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.397 [2024-11-27 04:42:31.790650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:35.397 [2024-11-27 04:42:31.790662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.661 ms 00:25:35.397 [2024-11-27 04:42:31.790668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.397 [2024-11-27 04:42:31.799050] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:35.397 [2024-11-27 04:42:31.801233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.397 [2024-11-27 04:42:31.801349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:35.397 [2024-11-27 04:42:31.801361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.526 ms 00:25:35.397 [2024-11-27 04:42:31.801372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.397 [2024-11-27 04:42:31.801436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.397 [2024-11-27 04:42:31.801444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:35.397 [2024-11-27 04:42:31.801451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:35.397 [2024-11-27 04:42:31.801457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.397 [2024-11-27 04:42:31.801521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.397 [2024-11-27 04:42:31.801529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:35.397 [2024-11-27 04:42:31.801536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:35.397 [2024-11-27 04:42:31.801542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.397 [2024-11-27 04:42:31.801558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.397 [2024-11-27 04:42:31.801564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:35.397 
[2024-11-27 04:42:31.801570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:25:35.397 [2024-11-27 04:42:31.801576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:35.397 [2024-11-27 04:42:31.801600] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:25:35.397 [2024-11-27 04:42:31.801607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:35.397 [2024-11-27 04:42:31.801613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:25:35.397 [2024-11-27 04:42:31.801619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:25:35.397 [2024-11-27 04:42:31.801627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:35.397 [2024-11-27 04:42:31.819160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:35.397 [2024-11-27 04:42:31.819188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:25:35.397 [2024-11-27 04:42:31.819197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.519 ms
00:25:35.397 [2024-11-27 04:42:31.819203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:35.397 [2024-11-27 04:42:31.819260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:35.397 [2024-11-27 04:42:31.819268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:25:35.397 [2024-11-27 04:42:31.819275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:25:35.397 [2024-11-27 04:42:31.819280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:35.397 [2024-11-27 04:42:31.820035] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 215.155 ms, result 0
00:25:36.328 [2024-11-27T04:42:55.922Z] Copying: 1024/1024 [MB] (average 42 MBps)
[2024-11-27 04:42:55.681317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:59.335 [2024-11-27 04:42:55.681371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
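The trace_step records above follow a fixed four-record cadence per management step: Action, then name, duration, and status, with the whole phase closed out by a finish_msg summary ('FTL startup', duration = 215.155 ms, result 0). When digging through one of these runs it can help to tally time per step; below is a minimal sketch, assuming one record per line as in the original console output, with a hypothetical capture file name:

    import re

    # Record shapes taken from the mngt/ftl_mngt.c trace_step output above.
    NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s*$")
    DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

    def step_durations(path):
        steps = {}
        pending = None
        with open(path) as log:
            for line in log:
                name = NAME_RE.search(line)
                if name:
                    pending = name.group(1)  # remember step name until its duration arrives
                    continue
                dur = DUR_RE.search(line)
                if dur and pending is not None:
                    steps[pending] = steps.get(pending, 0.0) + float(dur.group(1))
                    pending = None
        return steps

    # "console.log" is a hypothetical capture of the output shown here.
    for name, ms in sorted(step_durations("console.log").items(), key=lambda kv: -kv[1]):
        print(f"{ms:10.3f} ms  {name}")

On the startup sequence above this would rank 'Restore P2L checkpoints' (43.661 ms) and 'Set FTL dirty state' (17.519 ms) as the costliest visible steps.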
00:25:59.335 [2024-11-27 04:42:55.681385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:59.335 [2024-11-27 04:42:55.681394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.335 [2024-11-27 04:42:55.682320] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:59.335 [2024-11-27 04:42:55.686953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.335 [2024-11-27 04:42:55.686987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:59.335 [2024-11-27 04:42:55.686999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.612 ms 00:25:59.335 [2024-11-27 04:42:55.687011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.335 [2024-11-27 04:42:55.699123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.335 [2024-11-27 04:42:55.699156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:59.335 [2024-11-27 04:42:55.699166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.976 ms 00:25:59.335 [2024-11-27 04:42:55.699173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.335 [2024-11-27 04:42:55.717424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.335 [2024-11-27 04:42:55.717456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:59.335 [2024-11-27 04:42:55.717466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.236 ms 00:25:59.335 [2024-11-27 04:42:55.717473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.335 [2024-11-27 04:42:55.723656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.335 [2024-11-27 04:42:55.723682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:59.335 [2024-11-27 04:42:55.723692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:25:59.335 [2024-11-27 04:42:55.723701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.335 [2024-11-27 04:42:55.748419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.335 [2024-11-27 04:42:55.748457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:59.335 [2024-11-27 04:42:55.748469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.654 ms 00:25:59.335 [2024-11-27 04:42:55.748477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.335 [2024-11-27 04:42:55.762649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.335 [2024-11-27 04:42:55.762688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:59.335 [2024-11-27 04:42:55.762701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.134 ms 00:25:59.335 [2024-11-27 04:42:55.762710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.335 [2024-11-27 04:42:55.817221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.335 [2024-11-27 04:42:55.817267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:59.335 [2024-11-27 04:42:55.817283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.443 ms 00:25:59.335 [2024-11-27 04:42:55.817291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.335 [2024-11-27 
04:42:55.840151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.335 [2024-11-27 04:42:55.840182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:59.335 [2024-11-27 04:42:55.840194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.846 ms 00:25:59.335 [2024-11-27 04:42:55.840209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.335 [2024-11-27 04:42:55.862524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.335 [2024-11-27 04:42:55.862663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:59.335 [2024-11-27 04:42:55.862679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.285 ms 00:25:59.335 [2024-11-27 04:42:55.862686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.335 [2024-11-27 04:42:55.885173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.335 [2024-11-27 04:42:55.885304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:59.335 [2024-11-27 04:42:55.885319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.459 ms 00:25:59.335 [2024-11-27 04:42:55.885327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.335 [2024-11-27 04:42:55.907386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.335 [2024-11-27 04:42:55.907498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:59.335 [2024-11-27 04:42:55.907585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.902 ms 00:25:59.335 [2024-11-27 04:42:55.907607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.335 [2024-11-27 04:42:55.907659] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:59.335 [2024-11-27 04:42:55.907739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129024 / 261120 wr_cnt: 1 state: open 00:25:59.336 [2024-11-27 04:42:55.907776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.907805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.907833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.907860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.907889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.907954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908157] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.908982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909139] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.909980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 
04:42:55.910064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
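Each ftl_dev_dump_bands line above reports a band's valid blocks out of its size (261120 here), its write count, and its state; the dump continues through Band 100 just below. A quick sketch for summarizing such a dump (field layout read off the lines above; input is any captured chunk of this log):

    import re

    # Matches lines like "Band 1: 129024 / 261120 wr_cnt: 1 state: open".
    BAND_RE = re.compile(r"Band\s+(\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

    def summarize_bands(text):
        valid = size = 0
        states = {}
        for _band, v, s, _wr_cnt, state in BAND_RE.findall(text):
            valid += int(v)
            size += int(s)
            states[state] = states.get(state, 0) + 1
        return valid, size, states

    sample = "Band 1: 129024 / 261120 wr_cnt: 1 state: open"  # from the dump above
    print(summarize_bands(sample))  # -> (129024, 261120, {'open': 1})

The statistics block just below the dump ties out the same way: total writes 129984 against user writes 129024 gives WAF = 129984 / 129024 ≈ 1.0074, matching the logged value.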
00:25:59.336 [2024-11-27 04:42:55.910927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.910985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:59.336 [2024-11-27 04:42:55.911014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:59.337 [2024-11-27 04:42:55.911041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:59.337 [2024-11-27 04:42:55.911069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:59.337 [2024-11-27 04:42:55.911096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:59.337 [2024-11-27 04:42:55.911152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:59.337 [2024-11-27 04:42:55.911181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:59.337 [2024-11-27 04:42:55.911208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:59.337 [2024-11-27 04:42:55.911236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:59.337 [2024-11-27 04:42:55.911263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:59.337 [2024-11-27 04:42:55.911313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:59.337 [2024-11-27 04:42:55.911344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:59.337 [2024-11-27 04:42:55.911372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:59.337 [2024-11-27 04:42:55.911407] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:59.337 [2024-11-27 04:42:55.911425] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b2275ce7-e06a-4d66-8d6c-890a04538f03 00:25:59.337 [2024-11-27 04:42:55.911464] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129024 00:25:59.337 [2024-11-27 04:42:55.911514] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129984 00:25:59.337 [2024-11-27 04:42:55.911535] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129024 00:25:59.337 [2024-11-27 04:42:55.911554] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:25:59.337 [2024-11-27 04:42:55.911572] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:59.337 [2024-11-27 04:42:55.911590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:59.337 [2024-11-27 04:42:55.911608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:59.337 [2024-11-27 04:42:55.911625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:59.337 [2024-11-27 04:42:55.911698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:59.337 [2024-11-27 04:42:55.911715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.337 [2024-11-27 04:42:55.911744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:59.337 [2024-11-27 04:42:55.911763] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.058 ms 00:25:59.337 [2024-11-27 04:42:55.911781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.595 [2024-11-27 04:42:55.924019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.595 [2024-11-27 04:42:55.924119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:59.595 [2024-11-27 04:42:55.924195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.210 ms 00:25:59.595 [2024-11-27 04:42:55.924296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.595 [2024-11-27 04:42:55.924671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.595 [2024-11-27 04:42:55.924765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:59.595 [2024-11-27 04:42:55.924822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:25:59.595 [2024-11-27 04:42:55.924845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.595 [2024-11-27 04:42:55.957539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.595 [2024-11-27 04:42:55.957573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:59.595 [2024-11-27 04:42:55.957583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.595 [2024-11-27 04:42:55.957590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.595 [2024-11-27 04:42:55.957647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.595 [2024-11-27 04:42:55.957655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:59.595 [2024-11-27 04:42:55.957665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.595 [2024-11-27 04:42:55.957673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.595 [2024-11-27 04:42:55.957745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.595 [2024-11-27 04:42:55.957755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:59.595 [2024-11-27 04:42:55.957763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.595 [2024-11-27 04:42:55.957771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.595 [2024-11-27 04:42:55.957785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.595 [2024-11-27 04:42:55.957793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:59.595 [2024-11-27 04:42:55.957800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.595 [2024-11-27 04:42:55.957808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.595 [2024-11-27 04:42:56.035575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.595 [2024-11-27 04:42:56.035619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:59.595 [2024-11-27 04:42:56.035630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.595 [2024-11-27 04:42:56.035638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.595 [2024-11-27 04:42:56.098741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.595 [2024-11-27 04:42:56.098909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:25:59.595 [2024-11-27 04:42:56.098925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.595 [2024-11-27 04:42:56.098938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.595 [2024-11-27 04:42:56.099003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.595 [2024-11-27 04:42:56.099012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:59.595 [2024-11-27 04:42:56.099019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.595 [2024-11-27 04:42:56.099027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.595 [2024-11-27 04:42:56.099060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.595 [2024-11-27 04:42:56.099068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:59.595 [2024-11-27 04:42:56.099076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.595 [2024-11-27 04:42:56.099083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.595 [2024-11-27 04:42:56.099172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.595 [2024-11-27 04:42:56.099181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:59.595 [2024-11-27 04:42:56.099189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.595 [2024-11-27 04:42:56.099196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.595 [2024-11-27 04:42:56.099224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.595 [2024-11-27 04:42:56.099232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:59.595 [2024-11-27 04:42:56.099240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.595 [2024-11-27 04:42:56.099247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.595 [2024-11-27 04:42:56.099280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.595 [2024-11-27 04:42:56.099289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:59.595 [2024-11-27 04:42:56.099297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.595 [2024-11-27 04:42:56.099304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.595 [2024-11-27 04:42:56.099342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.595 [2024-11-27 04:42:56.099351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:59.595 [2024-11-27 04:42:56.099359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.595 [2024-11-27 04:42:56.099366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.595 [2024-11-27 04:42:56.099474] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 420.920 ms, result 0 00:26:00.973 00:26:00.973 00:26:00.973 04:42:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:03.499 04:42:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:03.499 [2024-11-27 04:42:59.552145] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:26:03.499 [2024-11-27 04:42:59.552272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79337 ] 00:26:03.499 [2024-11-27 04:42:59.710671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.499 [2024-11-27 04:42:59.810404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.499 [2024-11-27 04:43:00.069213] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:03.499 [2024-11-27 04:43:00.069278] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:03.764 [2024-11-27 04:43:00.222352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.764 [2024-11-27 04:43:00.222518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:03.764 [2024-11-27 04:43:00.222539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:03.764 [2024-11-27 04:43:00.222549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.764 [2024-11-27 04:43:00.222602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.764 [2024-11-27 04:43:00.222615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:03.764 [2024-11-27 04:43:00.222623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:03.764 [2024-11-27 04:43:00.222630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.764 [2024-11-27 04:43:00.222650] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:03.764 [2024-11-27 04:43:00.223312] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:03.764 [2024-11-27 04:43:00.223334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.764 [2024-11-27 04:43:00.223342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:03.764 [2024-11-27 04:43:00.223351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:26:03.764 [2024-11-27 04:43:00.223358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.764 [2024-11-27 04:43:00.224381] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:03.764 [2024-11-27 04:43:00.236330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.764 [2024-11-27 04:43:00.236363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:03.764 [2024-11-27 04:43:00.236376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.950 ms 00:26:03.764 [2024-11-27 04:43:00.236384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.764 [2024-11-27 04:43:00.236439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.764 [2024-11-27 04:43:00.236449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:03.764 [2024-11-27 04:43:00.236457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:03.764 [2024-11-27 
04:43:00.236464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.764 [2024-11-27 04:43:00.241223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.764 [2024-11-27 04:43:00.241359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:03.764 [2024-11-27 04:43:00.241373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.702 ms 00:26:03.764 [2024-11-27 04:43:00.241386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.764 [2024-11-27 04:43:00.241452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.764 [2024-11-27 04:43:00.241461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:03.764 [2024-11-27 04:43:00.241469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:03.764 [2024-11-27 04:43:00.241476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.764 [2024-11-27 04:43:00.241524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.764 [2024-11-27 04:43:00.241534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:03.764 [2024-11-27 04:43:00.241542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:03.764 [2024-11-27 04:43:00.241555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.764 [2024-11-27 04:43:00.241578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:03.764 [2024-11-27 04:43:00.244862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.764 [2024-11-27 04:43:00.244888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:03.764 [2024-11-27 04:43:00.244900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.289 ms 00:26:03.764 [2024-11-27 04:43:00.244907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.764 [2024-11-27 04:43:00.244934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.764 [2024-11-27 04:43:00.244942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:03.764 [2024-11-27 04:43:00.244958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:03.764 [2024-11-27 04:43:00.244965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.764 [2024-11-27 04:43:00.244984] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:03.764 [2024-11-27 04:43:00.245001] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:03.764 [2024-11-27 04:43:00.245034] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:03.764 [2024-11-27 04:43:00.245051] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:03.764 [2024-11-27 04:43:00.245154] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:03.764 [2024-11-27 04:43:00.245164] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:03.764 [2024-11-27 04:43:00.245175] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:03.764 
[2024-11-27 04:43:00.245184] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:03.764 [2024-11-27 04:43:00.245192] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:03.765 [2024-11-27 04:43:00.245200] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:03.765 [2024-11-27 04:43:00.245208] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:03.765 [2024-11-27 04:43:00.245217] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:03.765 [2024-11-27 04:43:00.245225] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:03.765 [2024-11-27 04:43:00.245232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.765 [2024-11-27 04:43:00.245240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:03.765 [2024-11-27 04:43:00.245247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:26:03.765 [2024-11-27 04:43:00.245255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.765 [2024-11-27 04:43:00.245342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.765 [2024-11-27 04:43:00.245350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:03.765 [2024-11-27 04:43:00.245357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:03.765 [2024-11-27 04:43:00.245364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.765 [2024-11-27 04:43:00.245468] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:03.765 [2024-11-27 04:43:00.245478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:03.765 [2024-11-27 04:43:00.245485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:03.765 [2024-11-27 04:43:00.245493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.765 [2024-11-27 04:43:00.245501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:03.765 [2024-11-27 04:43:00.245508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:03.765 [2024-11-27 04:43:00.245515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:03.765 [2024-11-27 04:43:00.245523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:03.765 [2024-11-27 04:43:00.245529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:03.765 [2024-11-27 04:43:00.245536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:03.765 [2024-11-27 04:43:00.245542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:03.765 [2024-11-27 04:43:00.245549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:03.765 [2024-11-27 04:43:00.245555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:03.765 [2024-11-27 04:43:00.245567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:03.765 [2024-11-27 04:43:00.245574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:03.765 [2024-11-27 04:43:00.245582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.765 [2024-11-27 04:43:00.245589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
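The layout figures printed above are internally consistent: 20971520 L2P entries at an address size of 4 bytes come to exactly the 80.00 MiB that the region dump (which continues below) reports for Region l2p. A one-line check:

    entries = 20971520     # "L2P entries" from ftl_layout.c: 689 above
    addr_bytes = 4         # "L2P address size" from ftl_layout.c: 690 above
    print(entries * addr_bytes / 2**20)  # -> 80.0, the "blocks: 80.00 MiB" of Region l2p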
00:26:03.765 [2024-11-27 04:43:00.245596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:03.765 [2024-11-27 04:43:00.245602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.765 [2024-11-27 04:43:00.245609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:03.765 [2024-11-27 04:43:00.245616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:03.765 [2024-11-27 04:43:00.245622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.765 [2024-11-27 04:43:00.245629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:03.765 [2024-11-27 04:43:00.245636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:03.765 [2024-11-27 04:43:00.245643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.765 [2024-11-27 04:43:00.245649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:03.765 [2024-11-27 04:43:00.245655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:03.765 [2024-11-27 04:43:00.245661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.765 [2024-11-27 04:43:00.245667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:03.765 [2024-11-27 04:43:00.245674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:03.765 [2024-11-27 04:43:00.245681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.765 [2024-11-27 04:43:00.245687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:03.765 [2024-11-27 04:43:00.245694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:03.765 [2024-11-27 04:43:00.245700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:03.765 [2024-11-27 04:43:00.245706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:03.765 [2024-11-27 04:43:00.245713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:03.765 [2024-11-27 04:43:00.245719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:03.765 [2024-11-27 04:43:00.245737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:03.765 [2024-11-27 04:43:00.245744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:03.765 [2024-11-27 04:43:00.245750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.765 [2024-11-27 04:43:00.245757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:03.765 [2024-11-27 04:43:00.245763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:03.765 [2024-11-27 04:43:00.245770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.765 [2024-11-27 04:43:00.245776] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:03.765 [2024-11-27 04:43:00.245784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:03.765 [2024-11-27 04:43:00.245791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:03.765 [2024-11-27 04:43:00.245798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.765 [2024-11-27 04:43:00.245806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:03.765 [2024-11-27 04:43:00.245813] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:03.765 [2024-11-27 04:43:00.245819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:03.765 [2024-11-27 04:43:00.245826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:03.765 [2024-11-27 04:43:00.245833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:03.765 [2024-11-27 04:43:00.245840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:03.765 [2024-11-27 04:43:00.245847] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:03.765 [2024-11-27 04:43:00.245856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:03.765 [2024-11-27 04:43:00.245866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:03.765 [2024-11-27 04:43:00.245874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:03.765 [2024-11-27 04:43:00.245881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:03.765 [2024-11-27 04:43:00.245887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:03.765 [2024-11-27 04:43:00.245894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:03.765 [2024-11-27 04:43:00.245901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:03.765 [2024-11-27 04:43:00.245908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:03.765 [2024-11-27 04:43:00.245915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:03.765 [2024-11-27 04:43:00.245922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:03.765 [2024-11-27 04:43:00.245929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:03.765 [2024-11-27 04:43:00.245936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:03.765 [2024-11-27 04:43:00.245943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:03.765 [2024-11-27 04:43:00.245950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:03.765 [2024-11-27 04:43:00.245957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:03.765 [2024-11-27 04:43:00.245964] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:03.765 [2024-11-27 04:43:00.245972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:26:03.765 [2024-11-27 04:43:00.245980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:03.765 [2024-11-27 04:43:00.245988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:03.765 [2024-11-27 04:43:00.245994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:03.765 [2024-11-27 04:43:00.246001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:03.765 [2024-11-27 04:43:00.246008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.765 [2024-11-27 04:43:00.246015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:03.765 [2024-11-27 04:43:00.246022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:26:03.765 [2024-11-27 04:43:00.246030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.765 [2024-11-27 04:43:00.271845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.765 [2024-11-27 04:43:00.271957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:03.765 [2024-11-27 04:43:00.271971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.761 ms 00:26:03.765 [2024-11-27 04:43:00.271984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.765 [2024-11-27 04:43:00.272065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.765 [2024-11-27 04:43:00.272073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:03.765 [2024-11-27 04:43:00.272081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:03.766 [2024-11-27 04:43:00.272088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.766 [2024-11-27 04:43:00.312142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.766 [2024-11-27 04:43:00.312182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:03.766 [2024-11-27 04:43:00.312194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.003 ms 00:26:03.766 [2024-11-27 04:43:00.312203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.766 [2024-11-27 04:43:00.312241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.766 [2024-11-27 04:43:00.312250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:03.766 [2024-11-27 04:43:00.312261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:03.766 [2024-11-27 04:43:00.312269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.766 [2024-11-27 04:43:00.312592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.766 [2024-11-27 04:43:00.312608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:03.766 [2024-11-27 04:43:00.312617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:26:03.766 [2024-11-27 04:43:00.312624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.766 [2024-11-27 04:43:00.312765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
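The SB metadata layout lines above give each region as a type, version, and block offset/size in hex. Assuming FTL's 4 KiB block size, which the dumps above imply, these decode to the MiB figures shown earlier: for example type 0x2 (the L2P) at blk_offs:0x20 blk_sz:0x5000 sits 0.12 MiB into the device and is 80 MiB long. A small decoder sketch (regex and block size are inferences from this output, not taken from SPDK headers):

    import re

    BLOCK = 4096  # assumed FTL block size; consistent with the MiB figures above

    SB_RE = re.compile(
        r"Region type:(0x[0-9a-f]+) ver:(\d+) blk_offs:(0x[0-9a-f]+) blk_sz:(0x[0-9a-f]+)"
    )

    def decode(line):
        m = SB_RE.search(line)
        if not m:
            return None
        rtype, ver = m.group(1), int(m.group(2))
        offs_mib = int(m.group(3), 16) * BLOCK / 2**20
        size_mib = int(m.group(4), 16) * BLOCK / 2**20
        return rtype, ver, offs_mib, size_mib

    print(decode("Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000"))
    # -> ('0x2', 0, 0.125, 80.0): matches Region l2p, offset 0.12 MiB, 80.00 MiB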
00:26:03.766 [2024-11-27 04:43:00.312775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:03.766 [2024-11-27 04:43:00.312783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:26:03.766 [2024-11-27 04:43:00.312814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.766 [2024-11-27 04:43:00.325717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.766 [2024-11-27 04:43:00.325762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:03.766 [2024-11-27 04:43:00.325774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.883 ms 00:26:03.766 [2024-11-27 04:43:00.325782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.766 [2024-11-27 04:43:00.337834] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:03.766 [2024-11-27 04:43:00.337866] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:03.766 [2024-11-27 04:43:00.337877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.766 [2024-11-27 04:43:00.337885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:03.766 [2024-11-27 04:43:00.337894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.993 ms 00:26:03.766 [2024-11-27 04:43:00.337901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.024 [2024-11-27 04:43:00.361881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.024 [2024-11-27 04:43:00.361930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:04.024 [2024-11-27 04:43:00.361940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.943 ms 00:26:04.024 [2024-11-27 04:43:00.361948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.024 [2024-11-27 04:43:00.373119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.024 [2024-11-27 04:43:00.373151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:04.024 [2024-11-27 04:43:00.373160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.134 ms 00:26:04.024 [2024-11-27 04:43:00.373167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.024 [2024-11-27 04:43:00.384250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.024 [2024-11-27 04:43:00.384377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:04.024 [2024-11-27 04:43:00.384392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.052 ms 00:26:04.024 [2024-11-27 04:43:00.384400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.024 [2024-11-27 04:43:00.385008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.024 [2024-11-27 04:43:00.385028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:04.024 [2024-11-27 04:43:00.385040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:26:04.024 [2024-11-27 04:43:00.385048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.024 [2024-11-27 04:43:00.439505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.024 [2024-11-27 04:43:00.439683] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:04.024 [2024-11-27 04:43:00.439709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.439 ms 00:26:04.024 [2024-11-27 04:43:00.439717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.024 [2024-11-27 04:43:00.450083] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:04.024 [2024-11-27 04:43:00.452603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.024 [2024-11-27 04:43:00.452632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:04.024 [2024-11-27 04:43:00.452644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.826 ms 00:26:04.024 [2024-11-27 04:43:00.452653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.024 [2024-11-27 04:43:00.452762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.024 [2024-11-27 04:43:00.452773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:04.024 [2024-11-27 04:43:00.452784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:04.024 [2024-11-27 04:43:00.452791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.024 [2024-11-27 04:43:00.454213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.024 [2024-11-27 04:43:00.454244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:04.024 [2024-11-27 04:43:00.454254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.385 ms 00:26:04.024 [2024-11-27 04:43:00.454262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.024 [2024-11-27 04:43:00.454286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.024 [2024-11-27 04:43:00.454294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:04.024 [2024-11-27 04:43:00.454303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:04.024 [2024-11-27 04:43:00.454310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.024 [2024-11-27 04:43:00.454346] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:04.024 [2024-11-27 04:43:00.454356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.024 [2024-11-27 04:43:00.454363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:04.024 [2024-11-27 04:43:00.454371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:04.024 [2024-11-27 04:43:00.454379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.024 [2024-11-27 04:43:00.477334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.024 [2024-11-27 04:43:00.477454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:04.024 [2024-11-27 04:43:00.477475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.939 ms 00:26:04.024 [2024-11-27 04:43:00.477484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.024 [2024-11-27 04:43:00.477551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.024 [2024-11-27 04:43:00.477560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:04.024 [2024-11-27 04:43:00.477568] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:04.024 [2024-11-27 04:43:00.477575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.024 [2024-11-27 04:43:00.478453] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 255.700 ms, result 0 00:26:05.397  [2024-11-27T04:43:02.917Z] Copying: 1324/1048576 [kB] (1324 kBps) [2024-11-27T04:43:03.849Z] Copying: 5588/1048576 [kB] (4264 kBps) [2024-11-27T04:43:04.781Z] Copying: 52/1024 [MB] (46 MBps) [2024-11-27T04:43:05.713Z] Copying: 105/1024 [MB] (52 MBps) [2024-11-27T04:43:07.085Z] Copying: 156/1024 [MB] (51 MBps) [2024-11-27T04:43:08.037Z] Copying: 206/1024 [MB] (49 MBps) [2024-11-27T04:43:08.985Z] Copying: 254/1024 [MB] (47 MBps) [2024-11-27T04:43:09.920Z] Copying: 300/1024 [MB] (45 MBps) [2024-11-27T04:43:10.854Z] Copying: 349/1024 [MB] (49 MBps) [2024-11-27T04:43:11.787Z] Copying: 397/1024 [MB] (47 MBps) [2024-11-27T04:43:12.721Z] Copying: 445/1024 [MB] (47 MBps) [2024-11-27T04:43:14.095Z] Copying: 492/1024 [MB] (47 MBps) [2024-11-27T04:43:14.662Z] Copying: 524/1024 [MB] (32 MBps) [2024-11-27T04:43:16.036Z] Copying: 561/1024 [MB] (37 MBps) [2024-11-27T04:43:16.971Z] Copying: 609/1024 [MB] (47 MBps) [2024-11-27T04:43:17.903Z] Copying: 656/1024 [MB] (46 MBps) [2024-11-27T04:43:18.835Z] Copying: 704/1024 [MB] (48 MBps) [2024-11-27T04:43:19.768Z] Copying: 734/1024 [MB] (29 MBps) [2024-11-27T04:43:20.700Z] Copying: 760/1024 [MB] (26 MBps) [2024-11-27T04:43:22.072Z] Copying: 803/1024 [MB] (42 MBps) [2024-11-27T04:43:23.003Z] Copying: 851/1024 [MB] (48 MBps) [2024-11-27T04:43:23.934Z] Copying: 903/1024 [MB] (51 MBps) [2024-11-27T04:43:24.864Z] Copying: 954/1024 [MB] (51 MBps) [2024-11-27T04:43:25.120Z] Copying: 1005/1024 [MB] (51 MBps) [2024-11-27T04:43:26.053Z] Copying: 1024/1024 [MB] (average 42 MBps)[2024-11-27 04:43:26.033287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.466 [2024-11-27 04:43:26.033347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:29.466 [2024-11-27 04:43:26.033362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:29.466 [2024-11-27 04:43:26.033370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.466 [2024-11-27 04:43:26.033391] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:29.466 [2024-11-27 04:43:26.036002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.466 [2024-11-27 04:43:26.036170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:29.466 [2024-11-27 04:43:26.036188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:26:29.466 [2024-11-27 04:43:26.036195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.466 [2024-11-27 04:43:26.036420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.466 [2024-11-27 04:43:26.036432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:29.466 [2024-11-27 04:43:26.036441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:26:29.466 [2024-11-27 04:43:26.036448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.466 [2024-11-27 04:43:26.046479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.466 [2024-11-27 04:43:26.046523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist L2P 00:26:29.466 [2024-11-27 04:43:26.046534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.016 ms 00:26:29.466 [2024-11-27 04:43:26.046542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.725 [2024-11-27 04:43:26.052764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.725 [2024-11-27 04:43:26.052793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:29.725 [2024-11-27 04:43:26.052809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.196 ms 00:26:29.725 [2024-11-27 04:43:26.052818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.725 [2024-11-27 04:43:26.078477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.725 [2024-11-27 04:43:26.078520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:29.725 [2024-11-27 04:43:26.078532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.620 ms 00:26:29.725 [2024-11-27 04:43:26.078540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.725 [2024-11-27 04:43:26.092025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.725 [2024-11-27 04:43:26.092171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:29.725 [2024-11-27 04:43:26.092188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.447 ms 00:26:29.725 [2024-11-27 04:43:26.092197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.725 [2024-11-27 04:43:26.093900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.725 [2024-11-27 04:43:26.093930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:29.725 [2024-11-27 04:43:26.093939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.670 ms 00:26:29.725 [2024-11-27 04:43:26.093952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.725 [2024-11-27 04:43:26.116782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.725 [2024-11-27 04:43:26.116812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:29.725 [2024-11-27 04:43:26.116822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.815 ms 00:26:29.725 [2024-11-27 04:43:26.116830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.725 [2024-11-27 04:43:26.139386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.725 [2024-11-27 04:43:26.139513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:29.725 [2024-11-27 04:43:26.139528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.526 ms 00:26:29.725 [2024-11-27 04:43:26.139536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.725 [2024-11-27 04:43:26.161675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.726 [2024-11-27 04:43:26.161706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:29.726 [2024-11-27 04:43:26.161716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.111 ms 00:26:29.726 [2024-11-27 04:43:26.161736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.726 [2024-11-27 04:43:26.183631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.726 
[2024-11-27 04:43:26.183663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:29.726 [2024-11-27 04:43:26.183674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.831 ms 00:26:29.726 [2024-11-27 04:43:26.183681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.726 [2024-11-27 04:43:26.183711] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:29.726 [2024-11-27 04:43:26.183740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:29.726 [2024-11-27 04:43:26.183750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:29.726 [2024-11-27 04:43:26.183759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.183997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184093] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 04:43:26.184270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:29.726 [2024-11-27 
04:43:26.184278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:26:29.727 [2024-11-27 04:43:26.184465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:29.727 [2024-11-27 04:43:26.184503] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:29.727 [2024-11-27 04:43:26.184511] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b2275ce7-e06a-4d66-8d6c-890a04538f03 00:26:29.727 [2024-11-27 04:43:26.184518] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:29.727 [2024-11-27 04:43:26.184526] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135616 00:26:29.727 [2024-11-27 04:43:26.184536] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133632 00:26:29.727 [2024-11-27 04:43:26.184544] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:26:29.727 [2024-11-27 04:43:26.184551] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:29.727 [2024-11-27 04:43:26.184565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:29.727 [2024-11-27 04:43:26.184572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:29.727 [2024-11-27 04:43:26.184579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:29.727 [2024-11-27 04:43:26.184585] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:29.727 [2024-11-27 04:43:26.184593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.727 [2024-11-27 04:43:26.184600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:29.727 [2024-11-27 04:43:26.184608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:26:29.727 [2024-11-27 04:43:26.184614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.727 [2024-11-27 04:43:26.196736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.727 [2024-11-27 04:43:26.196776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:29.727 [2024-11-27 04:43:26.196786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.107 ms 00:26:29.727 [2024-11-27 04:43:26.196795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.727 [2024-11-27 04:43:26.197137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.727 [2024-11-27 04:43:26.197149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:29.727 [2024-11-27 04:43:26.197157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:26:29.727 [2024-11-27 04:43:26.197164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.727 [2024-11-27 04:43:26.229805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.727 [2024-11-27 04:43:26.229844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:29.727 [2024-11-27 04:43:26.229856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:26:29.727 [2024-11-27 04:43:26.229865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.727 [2024-11-27 04:43:26.229926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.727 [2024-11-27 04:43:26.229934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:29.727 [2024-11-27 04:43:26.229942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.727 [2024-11-27 04:43:26.229950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.727 [2024-11-27 04:43:26.230010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.727 [2024-11-27 04:43:26.230019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:29.727 [2024-11-27 04:43:26.230027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.727 [2024-11-27 04:43:26.230034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.727 [2024-11-27 04:43:26.230048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.727 [2024-11-27 04:43:26.230056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:29.727 [2024-11-27 04:43:26.230064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.727 [2024-11-27 04:43:26.230071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.727 [2024-11-27 04:43:26.308071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.727 [2024-11-27 04:43:26.308120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:29.727 [2024-11-27 04:43:26.308132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.727 [2024-11-27 04:43:26.308141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.985 [2024-11-27 04:43:26.372706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.985 [2024-11-27 04:43:26.372774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:29.985 [2024-11-27 04:43:26.372787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.985 [2024-11-27 04:43:26.372796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.985 [2024-11-27 04:43:26.372866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.985 [2024-11-27 04:43:26.372880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:29.985 [2024-11-27 04:43:26.372889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.985 [2024-11-27 04:43:26.372896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.985 [2024-11-27 04:43:26.372929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.985 [2024-11-27 04:43:26.372938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:29.985 [2024-11-27 04:43:26.372946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.985 [2024-11-27 04:43:26.372953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.985 [2024-11-27 04:43:26.373083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.985 [2024-11-27 04:43:26.373094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:29.985 [2024-11-27 
04:43:26.373105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.985 [2024-11-27 04:43:26.373113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.985 [2024-11-27 04:43:26.373141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.985 [2024-11-27 04:43:26.373150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:29.985 [2024-11-27 04:43:26.373157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.985 [2024-11-27 04:43:26.373164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.985 [2024-11-27 04:43:26.373197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.985 [2024-11-27 04:43:26.373205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:29.985 [2024-11-27 04:43:26.373216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.985 [2024-11-27 04:43:26.373224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.985 [2024-11-27 04:43:26.373261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.985 [2024-11-27 04:43:26.373275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:29.985 [2024-11-27 04:43:26.373283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.985 [2024-11-27 04:43:26.373290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.985 [2024-11-27 04:43:26.373396] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.085 ms, result 0 00:26:30.917 00:26:30.917 00:26:30.917 04:43:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:33.442 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:33.442 04:43:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:33.442 [2024-11-27 04:43:29.488092] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
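
The 'FTL shutdown' management process above completed in 340.085 ms with result 0, and the checksum manifest recorded earlier verified the readback (testfile: OK), so no data was lost across the restart. The WAF of 1.0148 in the statistics dump is simply total writes divided by user writes: 135616 / 133632 ≈ 1.0148, i.e. roughly 1.5% of the media writes were FTL housekeeping rather than user data. spdk_dd is then relaunched with --skip=262144 to read back the second half of the test data for the same check. A minimal sketch of that checksum round-trip, using an illustrative path rather than the one pinned by dirty_shutdown.sh:

md5sum /tmp/ftl_testfile > /tmp/ftl_testfile.md5   # record the checksum of the data just written
# ... FTL shutdown and restart happen in between ...
md5sum -c /tmp/ftl_testfile.md5                    # prints "/tmp/ftl_testfile: OK" and exits 0 on a match
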
00:26:33.442 [2024-11-27 04:43:29.488219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79652 ] 00:26:33.442 [2024-11-27 04:43:29.648167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.442 [2024-11-27 04:43:29.753658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.442 [2024-11-27 04:43:30.017616] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:33.442 [2024-11-27 04:43:30.017684] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:33.700 [2024-11-27 04:43:30.171224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.700 [2024-11-27 04:43:30.171417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:33.700 [2024-11-27 04:43:30.171436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:33.700 [2024-11-27 04:43:30.171446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.700 [2024-11-27 04:43:30.171506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.700 [2024-11-27 04:43:30.171519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:33.700 [2024-11-27 04:43:30.171527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:33.700 [2024-11-27 04:43:30.171534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.700 [2024-11-27 04:43:30.171554] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:33.700 [2024-11-27 04:43:30.172307] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:33.700 [2024-11-27 04:43:30.172323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.700 [2024-11-27 04:43:30.172331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:33.700 [2024-11-27 04:43:30.172340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:26:33.700 [2024-11-27 04:43:30.172347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.700 [2024-11-27 04:43:30.173505] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:33.700 [2024-11-27 04:43:30.185835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.700 [2024-11-27 04:43:30.185968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:33.700 [2024-11-27 04:43:30.185986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.331 ms 00:26:33.700 [2024-11-27 04:43:30.185994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.700 [2024-11-27 04:43:30.186048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.700 [2024-11-27 04:43:30.186059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:33.700 [2024-11-27 04:43:30.186067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:33.700 [2024-11-27 04:43:30.186074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.700 [2024-11-27 04:43:30.191120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:33.700 [2024-11-27 04:43:30.191150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:33.700 [2024-11-27 04:43:30.191160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.985 ms 00:26:33.700 [2024-11-27 04:43:30.191171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.700 [2024-11-27 04:43:30.191244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.700 [2024-11-27 04:43:30.191253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:33.700 [2024-11-27 04:43:30.191262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:33.700 [2024-11-27 04:43:30.191269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.701 [2024-11-27 04:43:30.191309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.701 [2024-11-27 04:43:30.191318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:33.701 [2024-11-27 04:43:30.191326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:33.701 [2024-11-27 04:43:30.191333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.701 [2024-11-27 04:43:30.191358] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:33.701 [2024-11-27 04:43:30.194668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.701 [2024-11-27 04:43:30.194694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:33.701 [2024-11-27 04:43:30.194706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.316 ms 00:26:33.701 [2024-11-27 04:43:30.194713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.701 [2024-11-27 04:43:30.194755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.701 [2024-11-27 04:43:30.194764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:33.701 [2024-11-27 04:43:30.194773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:33.701 [2024-11-27 04:43:30.194779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.701 [2024-11-27 04:43:30.194798] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:33.701 [2024-11-27 04:43:30.194816] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:33.701 [2024-11-27 04:43:30.194850] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:33.701 [2024-11-27 04:43:30.194867] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:33.701 [2024-11-27 04:43:30.194968] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:33.701 [2024-11-27 04:43:30.194978] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:33.701 [2024-11-27 04:43:30.194988] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:33.701 [2024-11-27 04:43:30.194997] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:33.701 [2024-11-27 04:43:30.195006] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:33.701 [2024-11-27 04:43:30.195014] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:33.701 [2024-11-27 04:43:30.195022] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:33.701 [2024-11-27 04:43:30.195031] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:33.701 [2024-11-27 04:43:30.195039] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:33.701 [2024-11-27 04:43:30.195051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.701 [2024-11-27 04:43:30.195059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:33.701 [2024-11-27 04:43:30.195066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:26:33.701 [2024-11-27 04:43:30.195073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.701 [2024-11-27 04:43:30.195155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.701 [2024-11-27 04:43:30.195168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:33.701 [2024-11-27 04:43:30.195175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:33.701 [2024-11-27 04:43:30.195183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.701 [2024-11-27 04:43:30.195297] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:33.701 [2024-11-27 04:43:30.195308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:33.701 [2024-11-27 04:43:30.195316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:33.701 [2024-11-27 04:43:30.195324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.701 [2024-11-27 04:43:30.195331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:33.701 [2024-11-27 04:43:30.195338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:33.701 [2024-11-27 04:43:30.195344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:33.701 [2024-11-27 04:43:30.195352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:33.701 [2024-11-27 04:43:30.195358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:33.701 [2024-11-27 04:43:30.195365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:33.701 [2024-11-27 04:43:30.195371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:33.701 [2024-11-27 04:43:30.195378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:33.701 [2024-11-27 04:43:30.195384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:33.701 [2024-11-27 04:43:30.195396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:33.701 [2024-11-27 04:43:30.195404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:33.701 [2024-11-27 04:43:30.195411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.701 [2024-11-27 04:43:30.195418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:33.701 [2024-11-27 04:43:30.195424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:33.701 [2024-11-27 04:43:30.195430] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.701 [2024-11-27 04:43:30.195437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:33.701 [2024-11-27 04:43:30.195443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:33.701 [2024-11-27 04:43:30.195450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.701 [2024-11-27 04:43:30.195456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:33.701 [2024-11-27 04:43:30.195462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:33.701 [2024-11-27 04:43:30.195469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.701 [2024-11-27 04:43:30.195475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:33.701 [2024-11-27 04:43:30.195481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:33.701 [2024-11-27 04:43:30.195488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.701 [2024-11-27 04:43:30.195494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:33.701 [2024-11-27 04:43:30.195501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:33.701 [2024-11-27 04:43:30.195507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.701 [2024-11-27 04:43:30.195513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:33.701 [2024-11-27 04:43:30.195519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:33.701 [2024-11-27 04:43:30.195526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:33.701 [2024-11-27 04:43:30.195532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:33.701 [2024-11-27 04:43:30.195539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:33.701 [2024-11-27 04:43:30.195545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:33.701 [2024-11-27 04:43:30.195552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:33.701 [2024-11-27 04:43:30.195558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:33.701 [2024-11-27 04:43:30.195564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.701 [2024-11-27 04:43:30.195570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:33.701 [2024-11-27 04:43:30.195577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:33.701 [2024-11-27 04:43:30.195583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.701 [2024-11-27 04:43:30.195590] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:33.701 [2024-11-27 04:43:30.195597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:33.701 [2024-11-27 04:43:30.195604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:33.701 [2024-11-27 04:43:30.195612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.701 [2024-11-27 04:43:30.195620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:33.701 [2024-11-27 04:43:30.195627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:33.701 [2024-11-27 04:43:30.195633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:33.701 
[2024-11-27 04:43:30.195640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:33.701 [2024-11-27 04:43:30.195646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:33.701 [2024-11-27 04:43:30.195653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:33.701 [2024-11-27 04:43:30.195661] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:33.701 [2024-11-27 04:43:30.195669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:33.701 [2024-11-27 04:43:30.195680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:33.701 [2024-11-27 04:43:30.195687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:33.701 [2024-11-27 04:43:30.195694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:33.701 [2024-11-27 04:43:30.195701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:33.701 [2024-11-27 04:43:30.195707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:33.701 [2024-11-27 04:43:30.195714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:33.701 [2024-11-27 04:43:30.195734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:33.701 [2024-11-27 04:43:30.195741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:33.701 [2024-11-27 04:43:30.195748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:33.701 [2024-11-27 04:43:30.195756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:33.701 [2024-11-27 04:43:30.195764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:33.701 [2024-11-27 04:43:30.195771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:33.702 [2024-11-27 04:43:30.195778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:33.702 [2024-11-27 04:43:30.195786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:33.702 [2024-11-27 04:43:30.195792] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:33.702 [2024-11-27 04:43:30.195800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:33.702 [2024-11-27 04:43:30.195808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:33.702 [2024-11-27 04:43:30.195815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:33.702 [2024-11-27 04:43:30.195823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:33.702 [2024-11-27 04:43:30.195829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:33.702 [2024-11-27 04:43:30.195837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.702 [2024-11-27 04:43:30.195844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:33.702 [2024-11-27 04:43:30.195852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:26:33.702 [2024-11-27 04:43:30.195859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.702 [2024-11-27 04:43:30.221946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.702 [2024-11-27 04:43:30.221980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:33.702 [2024-11-27 04:43:30.221990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.044 ms 00:26:33.702 [2024-11-27 04:43:30.222000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.702 [2024-11-27 04:43:30.222080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.702 [2024-11-27 04:43:30.222088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:33.702 [2024-11-27 04:43:30.222096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:33.702 [2024-11-27 04:43:30.222103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.702 [2024-11-27 04:43:30.261044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.702 [2024-11-27 04:43:30.261084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:33.702 [2024-11-27 04:43:30.261096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.890 ms 00:26:33.702 [2024-11-27 04:43:30.261104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.702 [2024-11-27 04:43:30.261148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.702 [2024-11-27 04:43:30.261158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:33.702 [2024-11-27 04:43:30.261170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:33.702 [2024-11-27 04:43:30.261178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.702 [2024-11-27 04:43:30.261538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.702 [2024-11-27 04:43:30.261561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:33.702 [2024-11-27 04:43:30.261570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:26:33.702 [2024-11-27 04:43:30.261578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.702 [2024-11-27 04:43:30.261699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.702 [2024-11-27 04:43:30.261713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:33.702 [2024-11-27 04:43:30.261721] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:26:33.702 [2024-11-27 04:43:30.261753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.702 [2024-11-27 04:43:30.274880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.702 [2024-11-27 04:43:30.274912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:33.702 [2024-11-27 04:43:30.274924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.107 ms 00:26:33.702 [2024-11-27 04:43:30.274932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.962 [2024-11-27 04:43:30.287188] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:33.962 [2024-11-27 04:43:30.287221] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:33.962 [2024-11-27 04:43:30.287233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.962 [2024-11-27 04:43:30.287242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:33.962 [2024-11-27 04:43:30.287251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.188 ms 00:26:33.962 [2024-11-27 04:43:30.287258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.962 [2024-11-27 04:43:30.311412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.962 [2024-11-27 04:43:30.311445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:33.962 [2024-11-27 04:43:30.311457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.116 ms 00:26:33.962 [2024-11-27 04:43:30.311465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.962 [2024-11-27 04:43:30.323148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.962 [2024-11-27 04:43:30.323177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:33.962 [2024-11-27 04:43:30.323187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.636 ms 00:26:33.962 [2024-11-27 04:43:30.323194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.962 [2024-11-27 04:43:30.334293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.962 [2024-11-27 04:43:30.334324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:33.962 [2024-11-27 04:43:30.334334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.067 ms 00:26:33.962 [2024-11-27 04:43:30.334341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.962 [2024-11-27 04:43:30.334945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.962 [2024-11-27 04:43:30.334969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:33.962 [2024-11-27 04:43:30.334981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:26:33.962 [2024-11-27 04:43:30.334988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.962 [2024-11-27 04:43:30.389701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.962 [2024-11-27 04:43:30.389775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:33.962 [2024-11-27 04:43:30.389796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.694 ms 00:26:33.962 [2024-11-27 04:43:30.389805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.962 [2024-11-27 04:43:30.400300] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:33.962 [2024-11-27 04:43:30.402910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.962 [2024-11-27 04:43:30.402941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:33.962 [2024-11-27 04:43:30.402953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.051 ms 00:26:33.962 [2024-11-27 04:43:30.402963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.962 [2024-11-27 04:43:30.403065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.962 [2024-11-27 04:43:30.403076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:33.962 [2024-11-27 04:43:30.403087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:33.962 [2024-11-27 04:43:30.403095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.962 [2024-11-27 04:43:30.403628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.962 [2024-11-27 04:43:30.403653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:33.962 [2024-11-27 04:43:30.403663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:26:33.962 [2024-11-27 04:43:30.403670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.962 [2024-11-27 04:43:30.403692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.962 [2024-11-27 04:43:30.403700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:33.962 [2024-11-27 04:43:30.403708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:33.962 [2024-11-27 04:43:30.403718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.962 [2024-11-27 04:43:30.403761] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:33.962 [2024-11-27 04:43:30.403771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.962 [2024-11-27 04:43:30.403779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:33.962 [2024-11-27 04:43:30.403786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:33.962 [2024-11-27 04:43:30.403793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.962 [2024-11-27 04:43:30.426461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.962 [2024-11-27 04:43:30.426496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:33.962 [2024-11-27 04:43:30.426509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.651 ms 00:26:33.962 [2024-11-27 04:43:30.426518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.962 [2024-11-27 04:43:30.426585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.962 [2024-11-27 04:43:30.426595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:33.962 [2024-11-27 04:43:30.426603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:33.962 [2024-11-27 04:43:30.426610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
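
Every management step above is traced from mngt/ftl_mngt.c (trace_step at source lines 427-431) as an Action/name/duration/status quadruplet, and each one finished with status 0. That makes the console output easy to profile: pairing each step name with the duration that follows it shows which steps dominate startup (here Restore P2L checkpoints at 54.694 ms and Initialize NV cache at 38.890 ms). A minimal sketch, assuming the console output was saved one record per line as build.log (a hypothetical filename):

awk '
  /428:trace_step/ { sub(/.*name: /, "");     name = $0 }   # remember the step name
  /430:trace_step/ { sub(/.*duration: /, "")                # keep just the millisecond value
                     sub(/ ms.*/, "")
                     printf "%10.3f ms  %s\n", $0, name }   # emit "duration  name" pairs
' build.log | sort -rn | head                               # slowest steps first
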
00:26:33.962 [2024-11-27 04:43:30.427495] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 255.872 ms, result 0 00:26:35.355  [2024-11-27T04:43:32.876Z] Copying: 46/1024 [MB] (46 MBps) [2024-11-27T04:43:33.817Z] Copying: 92/1024 [MB] (46 MBps) [2024-11-27T04:43:34.751Z] Copying: 140/1024 [MB] (48 MBps) [2024-11-27T04:43:35.684Z] Copying: 187/1024 [MB] (46 MBps) [2024-11-27T04:43:36.622Z] Copying: 239/1024 [MB] (51 MBps) [2024-11-27T04:43:37.997Z] Copying: 286/1024 [MB] (47 MBps) [2024-11-27T04:43:38.931Z] Copying: 333/1024 [MB] (46 MBps) [2024-11-27T04:43:39.863Z] Copying: 381/1024 [MB] (48 MBps) [2024-11-27T04:43:40.796Z] Copying: 430/1024 [MB] (49 MBps) [2024-11-27T04:43:41.730Z] Copying: 483/1024 [MB] (52 MBps) [2024-11-27T04:43:42.667Z] Copying: 532/1024 [MB] (49 MBps) [2024-11-27T04:43:43.601Z] Copying: 580/1024 [MB] (48 MBps) [2024-11-27T04:43:44.975Z] Copying: 629/1024 [MB] (48 MBps) [2024-11-27T04:43:45.910Z] Copying: 676/1024 [MB] (46 MBps) [2024-11-27T04:43:46.843Z] Copying: 723/1024 [MB] (47 MBps) [2024-11-27T04:43:47.793Z] Copying: 768/1024 [MB] (44 MBps) [2024-11-27T04:43:48.728Z] Copying: 813/1024 [MB] (45 MBps) [2024-11-27T04:43:49.661Z] Copying: 860/1024 [MB] (46 MBps) [2024-11-27T04:43:51.033Z] Copying: 906/1024 [MB] (46 MBps) [2024-11-27T04:43:51.967Z] Copying: 956/1024 [MB] (49 MBps) [2024-11-27T04:43:52.226Z] Copying: 1003/1024 [MB] (46 MBps) [2024-11-27T04:43:52.226Z] Copying: 1024/1024 [MB] (average 47 MBps)[2024-11-27 04:43:52.070153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.639 [2024-11-27 04:43:52.070203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:55.639 [2024-11-27 04:43:52.070216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:55.639 [2024-11-27 04:43:52.070224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.639 [2024-11-27 04:43:52.070244] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:55.639 [2024-11-27 04:43:52.072962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.639 [2024-11-27 04:43:52.073010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:55.639 [2024-11-27 04:43:52.073020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.703 ms 00:26:55.639 [2024-11-27 04:43:52.073028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.639 [2024-11-27 04:43:52.073246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.639 [2024-11-27 04:43:52.073257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:55.639 [2024-11-27 04:43:52.073265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:26:55.639 [2024-11-27 04:43:52.073274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.639 [2024-11-27 04:43:52.077303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.639 [2024-11-27 04:43:52.077334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:55.639 [2024-11-27 04:43:52.077348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.010 ms 00:26:55.639 [2024-11-27 04:43:52.077356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.639 [2024-11-27 04:43:52.083614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:55.639 [2024-11-27 04:43:52.083652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:55.639 [2024-11-27 04:43:52.083662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.239 ms 00:26:55.640 [2024-11-27 04:43:52.083669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.640 [2024-11-27 04:43:52.106984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.640 [2024-11-27 04:43:52.107018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:55.640 [2024-11-27 04:43:52.107029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.265 ms 00:26:55.640 [2024-11-27 04:43:52.107037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.640 [2024-11-27 04:43:52.121041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.640 [2024-11-27 04:43:52.121076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:55.640 [2024-11-27 04:43:52.121088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.985 ms 00:26:55.640 [2024-11-27 04:43:52.121100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.640 [2024-11-27 04:43:52.122701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.640 [2024-11-27 04:43:52.122833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:55.640 [2024-11-27 04:43:52.122847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:26:55.640 [2024-11-27 04:43:52.122856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.640 [2024-11-27 04:43:52.146043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.640 [2024-11-27 04:43:52.146185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:55.640 [2024-11-27 04:43:52.146246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.169 ms 00:26:55.640 [2024-11-27 04:43:52.146268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.640 [2024-11-27 04:43:52.169338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.640 [2024-11-27 04:43:52.169458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:55.640 [2024-11-27 04:43:52.169516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.037 ms 00:26:55.640 [2024-11-27 04:43:52.169538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.640 [2024-11-27 04:43:52.191871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.640 [2024-11-27 04:43:52.192003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:55.640 [2024-11-27 04:43:52.192056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.301 ms 00:26:55.640 [2024-11-27 04:43:52.192080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.640 [2024-11-27 04:43:52.214588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.640 [2024-11-27 04:43:52.214710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:55.640 [2024-11-27 04:43:52.214786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.450 ms 00:26:55.640 [2024-11-27 04:43:52.214809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.640 [2024-11-27 
04:43:52.214842] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:55.640 [2024-11-27 04:43:52.214914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:55.640 [2024-11-27 04:43:52.214976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:55.640 [2024-11-27 04:43:52.215006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 
04:43:52.215833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.215949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:55.640 [2024-11-27 04:43:52.216801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:26:55.640 [2024-11-27 04:43:52.216858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.216889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.216945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.216991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.217981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:55.641 [2024-11-27 04:43:52.218959] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:55.641 [2024-11-27 04:43:52.219047] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b2275ce7-e06a-4d66-8d6c-890a04538f03 00:26:55.641 [2024-11-27 04:43:52.219080] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:55.641 [2024-11-27 04:43:52.219099] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:55.641 [2024-11-27 04:43:52.219118] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:55.641 [2024-11-27 04:43:52.219137] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:55.641 [2024-11-27 04:43:52.219193] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:55.641 [2024-11-27 04:43:52.219214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:55.641 [2024-11-27 04:43:52.219234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:55.641 [2024-11-27 04:43:52.219252] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:55.641 [2024-11-27 04:43:52.219269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:55.641 [2024-11-27 04:43:52.219288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.641 [2024-11-27 04:43:52.219334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:55.641 [2024-11-27 04:43:52.219360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.446 ms 00:26:55.641 [2024-11-27 04:43:52.219379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.900 [2024-11-27 04:43:52.231823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.900 [2024-11-27 04:43:52.231928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:55.900 [2024-11-27 04:43:52.231978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.393 ms 00:26:55.900 [2024-11-27 04:43:52.231989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.900 [2024-11-27 04:43:52.232348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.900 [2024-11-27 04:43:52.232359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:55.900 [2024-11-27 04:43:52.232367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:26:55.900 [2024-11-27 04:43:52.232375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.900 [2024-11-27 04:43:52.265219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.900 [2024-11-27 04:43:52.265359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:55.900 [2024-11-27 04:43:52.265374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.900 [2024-11-27 04:43:52.265382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.900 [2024-11-27 04:43:52.265450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.900 [2024-11-27 04:43:52.265458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:55.900 [2024-11-27 04:43:52.265466] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.900 [2024-11-27 04:43:52.265473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.900 [2024-11-27 04:43:52.265533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.900 [2024-11-27 04:43:52.265543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:55.900 [2024-11-27 04:43:52.265551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.900 [2024-11-27 04:43:52.265558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.900 [2024-11-27 04:43:52.265573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.900 [2024-11-27 04:43:52.265584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:55.900 [2024-11-27 04:43:52.265591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.900 [2024-11-27 04:43:52.265598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.900 [2024-11-27 04:43:52.344856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.900 [2024-11-27 04:43:52.344911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:55.900 [2024-11-27 04:43:52.344925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.900 [2024-11-27 04:43:52.344933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.900 [2024-11-27 04:43:52.409102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.900 [2024-11-27 04:43:52.409160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:55.900 [2024-11-27 04:43:52.409173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.900 [2024-11-27 04:43:52.409181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.900 [2024-11-27 04:43:52.409256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.900 [2024-11-27 04:43:52.409271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:55.900 [2024-11-27 04:43:52.409283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.900 [2024-11-27 04:43:52.409293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.900 [2024-11-27 04:43:52.409335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.900 [2024-11-27 04:43:52.409345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:55.900 [2024-11-27 04:43:52.409358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.900 [2024-11-27 04:43:52.409365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.900 [2024-11-27 04:43:52.409452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.900 [2024-11-27 04:43:52.409461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:55.900 [2024-11-27 04:43:52.409469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.900 [2024-11-27 04:43:52.409476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.900 [2024-11-27 04:43:52.409504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.900 [2024-11-27 04:43:52.409513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:26:55.900 [2024-11-27 04:43:52.409520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.900 [2024-11-27 04:43:52.409530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.900 [2024-11-27 04:43:52.409563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.900 [2024-11-27 04:43:52.409571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:55.900 [2024-11-27 04:43:52.409578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.900 [2024-11-27 04:43:52.409586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.900 [2024-11-27 04:43:52.409626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.900 [2024-11-27 04:43:52.409639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:55.900 [2024-11-27 04:43:52.409653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.900 [2024-11-27 04:43:52.409661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.900 [2024-11-27 04:43:52.409796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.586 ms, result 0 00:26:56.833 00:26:56.833 00:26:56.833 04:43:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:58.730 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:58.730 04:43:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:58.730 04:43:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:58.730 04:43:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:58.730 04:43:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:58.988 04:43:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:58.988 04:43:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:58.988 04:43:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:58.988 Process with pid 78409 is not found 00:26:58.988 04:43:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78409 00:26:58.988 04:43:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 78409 ']' 00:26:58.988 04:43:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 78409 00:26:58.988 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78409) - No such process 00:26:58.988 04:43:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 78409 is not found' 00:26:58.988 04:43:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:59.244 Remove shared memory files 00:26:59.244 04:43:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:59.244 04:43:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:59.244 04:43:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:59.244 04:43:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:59.244 04:43:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 
-- # rm -f rm -f 00:26:59.244 04:43:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:59.244 04:43:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:59.244 ************************************ 00:26:59.244 END TEST ftl_dirty_shutdown 00:26:59.244 ************************************ 00:26:59.244 00:26:59.244 real 2m19.380s 00:26:59.244 user 2m36.254s 00:26:59.244 sys 0m22.771s 00:26:59.244 04:43:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:59.244 04:43:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:59.244 04:43:55 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:59.244 04:43:55 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:59.244 04:43:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:59.244 04:43:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:59.244 ************************************ 00:26:59.244 START TEST ftl_upgrade_shutdown 00:26:59.244 ************************************ 00:26:59.244 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:59.244 * Looking for test storage... 00:26:59.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:59.244 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:59.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.502 --rc genhtml_branch_coverage=1 00:26:59.502 --rc genhtml_function_coverage=1 00:26:59.502 --rc genhtml_legend=1 00:26:59.502 --rc geninfo_all_blocks=1 00:26:59.502 --rc geninfo_unexecuted_blocks=1 00:26:59.502 00:26:59.502 ' 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:59.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.502 --rc genhtml_branch_coverage=1 00:26:59.502 --rc genhtml_function_coverage=1 00:26:59.502 --rc genhtml_legend=1 00:26:59.502 --rc geninfo_all_blocks=1 00:26:59.502 --rc geninfo_unexecuted_blocks=1 00:26:59.502 00:26:59.502 ' 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:59.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.502 --rc genhtml_branch_coverage=1 00:26:59.502 --rc genhtml_function_coverage=1 00:26:59.502 --rc genhtml_legend=1 00:26:59.502 --rc geninfo_all_blocks=1 00:26:59.502 --rc geninfo_unexecuted_blocks=1 00:26:59.502 00:26:59.502 ' 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:59.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.502 --rc genhtml_branch_coverage=1 00:26:59.502 --rc genhtml_function_coverage=1 00:26:59.502 --rc genhtml_legend=1 00:26:59.502 --rc geninfo_all_blocks=1 00:26:59.502 --rc geninfo_unexecuted_blocks=1 00:26:59.502 00:26:59.502 ' 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:59.502 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:59.503 04:43:55 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=79995 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 79995 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 79995 ']' 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:59.503 04:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:59.503 [2024-11-27 04:43:55.990256] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
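The waitforlisten call traced above gates the rest of the test: it polls the freshly forked spdk_tgt (pid 79995) until the daemon answers JSON-RPC on /var/tmp/spdk.sock. The same readiness check can be reproduced with the stock rpc.py client — a simplified sketch of the idea, not the autotest_common.sh helper itself (socket path, pid, and poll interval mirror the values seen in this run):

    # Block until spdk_tgt is serving JSON-RPC on its UNIX socket.
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 \
            rpc_get_methods >/dev/null 2>&1; do
        # Bail out early if the target died before it ever started listening.
        kill -0 79995 2>/dev/null || { echo 'spdk_tgt exited prematurely' >&2; break; }
        sleep 0.5
    done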
00:26:59.503 [2024-11-27 04:43:55.990373] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79995 ] 00:26:59.761 [2024-11-27 04:43:56.147937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.761 [2024-11-27 04:43:56.243568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:00.327 04:43:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:00.586 04:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:00.586 04:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:00.586 04:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:00.586 04:43:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:27:00.586 04:43:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:00.586 04:43:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:00.586 04:43:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:27:00.586 04:43:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:00.845 04:43:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:00.845 { 00:27:00.845 "name": "basen1", 00:27:00.845 "aliases": [ 00:27:00.845 "5cdd1bc0-9344-49c9-94f4-3f50c938383b" 00:27:00.845 ], 00:27:00.845 "product_name": "NVMe disk", 00:27:00.845 "block_size": 4096, 00:27:00.845 "num_blocks": 1310720, 00:27:00.845 "uuid": "5cdd1bc0-9344-49c9-94f4-3f50c938383b", 00:27:00.845 "numa_id": -1, 00:27:00.845 "assigned_rate_limits": { 00:27:00.845 "rw_ios_per_sec": 0, 00:27:00.845 "rw_mbytes_per_sec": 0, 00:27:00.845 "r_mbytes_per_sec": 0, 00:27:00.845 "w_mbytes_per_sec": 0 00:27:00.845 }, 00:27:00.845 "claimed": true, 00:27:00.845 "claim_type": "read_many_write_one", 00:27:00.845 "zoned": false, 00:27:00.845 "supported_io_types": { 00:27:00.845 "read": true, 00:27:00.845 "write": true, 00:27:00.845 "unmap": true, 00:27:00.845 "flush": true, 00:27:00.845 "reset": true, 00:27:00.845 "nvme_admin": true, 00:27:00.845 "nvme_io": true, 00:27:00.845 "nvme_io_md": false, 00:27:00.845 "write_zeroes": true, 00:27:00.845 "zcopy": false, 00:27:00.845 "get_zone_info": false, 00:27:00.845 "zone_management": false, 00:27:00.845 "zone_append": false, 00:27:00.845 "compare": true, 00:27:00.845 "compare_and_write": false, 00:27:00.845 "abort": true, 00:27:00.845 "seek_hole": false, 00:27:00.845 "seek_data": false, 00:27:00.845 "copy": true, 00:27:00.845 "nvme_iov_md": false 00:27:00.845 }, 00:27:00.845 "driver_specific": { 00:27:00.845 "nvme": [ 00:27:00.845 { 00:27:00.845 "pci_address": "0000:00:11.0", 00:27:00.845 "trid": { 00:27:00.845 "trtype": "PCIe", 00:27:00.845 "traddr": "0000:00:11.0" 00:27:00.845 }, 00:27:00.845 "ctrlr_data": { 00:27:00.845 "cntlid": 0, 00:27:00.845 "vendor_id": "0x1b36", 00:27:00.845 "model_number": "QEMU NVMe Ctrl", 00:27:00.845 "serial_number": "12341", 00:27:00.845 "firmware_revision": "8.0.0", 00:27:00.845 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:00.845 "oacs": { 00:27:00.845 "security": 0, 00:27:00.845 "format": 1, 00:27:00.845 "firmware": 0, 00:27:00.845 "ns_manage": 1 00:27:00.845 }, 00:27:00.845 "multi_ctrlr": false, 00:27:00.845 "ana_reporting": false 00:27:00.845 }, 00:27:00.845 "vs": { 00:27:00.845 "nvme_version": "1.4" 00:27:00.845 }, 00:27:00.845 "ns_data": { 00:27:00.845 "id": 1, 00:27:00.845 "can_share": false 00:27:00.845 } 00:27:00.845 } 00:27:00.845 ], 00:27:00.845 "mp_policy": "active_passive" 00:27:00.845 } 00:27:00.845 } 00:27:00.845 ]' 00:27:00.845 04:43:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:00.845 04:43:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:00.845 04:43:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:00.845 04:43:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:00.845 04:43:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:00.845 04:43:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:00.845 04:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:00.845 04:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:00.845 04:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:00.845 04:43:57 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:00.845 04:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:01.103 04:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=917fb7f0-02d0-4a2a-a526-73e5d8d4696a 00:27:01.103 04:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:01.103 04:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 917fb7f0-02d0-4a2a-a526-73e5d8d4696a 00:27:01.362 04:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:01.620 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=c1fb411d-2f61-4a10-aaf9-bf82de356247 00:27:01.620 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u c1fb411d-2f61-4a10-aaf9-bf82de356247 00:27:01.877 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=992cd4d8-abf9-4fca-b8ab-5b6797daae67 00:27:01.877 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 992cd4d8-abf9-4fca-b8ab-5b6797daae67 ]] 00:27:01.877 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 992cd4d8-abf9-4fca-b8ab-5b6797daae67 5120 00:27:01.877 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:01.877 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:01.877 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=992cd4d8-abf9-4fca-b8ab-5b6797daae67 00:27:01.877 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:01.878 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 992cd4d8-abf9-4fca-b8ab-5b6797daae67 00:27:01.878 04:43:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=992cd4d8-abf9-4fca-b8ab-5b6797daae67 00:27:01.878 04:43:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:01.878 04:43:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:01.878 04:43:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:01.878 04:43:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 992cd4d8-abf9-4fca-b8ab-5b6797daae67 00:27:01.878 04:43:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:01.878 { 00:27:01.878 "name": "992cd4d8-abf9-4fca-b8ab-5b6797daae67", 00:27:01.878 "aliases": [ 00:27:01.878 "lvs/basen1p0" 00:27:01.878 ], 00:27:01.878 "product_name": "Logical Volume", 00:27:01.878 "block_size": 4096, 00:27:01.878 "num_blocks": 5242880, 00:27:01.878 "uuid": "992cd4d8-abf9-4fca-b8ab-5b6797daae67", 00:27:01.878 "assigned_rate_limits": { 00:27:01.878 "rw_ios_per_sec": 0, 00:27:01.878 "rw_mbytes_per_sec": 0, 00:27:01.878 "r_mbytes_per_sec": 0, 00:27:01.878 "w_mbytes_per_sec": 0 00:27:01.878 }, 00:27:01.878 "claimed": false, 00:27:01.878 "zoned": false, 00:27:01.878 "supported_io_types": { 00:27:01.878 "read": true, 00:27:01.878 "write": true, 00:27:01.878 "unmap": true, 00:27:01.878 "flush": false, 00:27:01.878 "reset": true, 00:27:01.878 "nvme_admin": false, 00:27:01.878 "nvme_io": false, 00:27:01.878 "nvme_io_md": false, 00:27:01.878 "write_zeroes": 
true, 00:27:01.878 "zcopy": false, 00:27:01.878 "get_zone_info": false, 00:27:01.878 "zone_management": false, 00:27:01.878 "zone_append": false, 00:27:01.878 "compare": false, 00:27:01.878 "compare_and_write": false, 00:27:01.878 "abort": false, 00:27:01.878 "seek_hole": true, 00:27:01.878 "seek_data": true, 00:27:01.878 "copy": false, 00:27:01.878 "nvme_iov_md": false 00:27:01.878 }, 00:27:01.878 "driver_specific": { 00:27:01.878 "lvol": { 00:27:01.878 "lvol_store_uuid": "c1fb411d-2f61-4a10-aaf9-bf82de356247", 00:27:01.878 "base_bdev": "basen1", 00:27:01.878 "thin_provision": true, 00:27:01.878 "num_allocated_clusters": 0, 00:27:01.878 "snapshot": false, 00:27:01.878 "clone": false, 00:27:01.878 "esnap_clone": false 00:27:01.878 } 00:27:01.878 } 00:27:01.878 } 00:27:01.878 ]' 00:27:01.878 04:43:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:01.878 04:43:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:01.878 04:43:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:02.158 04:43:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:27:02.158 04:43:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:27:02.158 04:43:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:27:02.158 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:02.158 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:02.158 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:02.417 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:02.417 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:02.417 04:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:02.676 04:43:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:02.676 04:43:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:02.676 04:43:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 992cd4d8-abf9-4fca-b8ab-5b6797daae67 -c cachen1p0 --l2p_dram_limit 2 00:27:02.676 [2024-11-27 04:43:59.221292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.676 [2024-11-27 04:43:59.221331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:02.676 [2024-11-27 04:43:59.221344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:02.676 [2024-11-27 04:43:59.221350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.676 [2024-11-27 04:43:59.221397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.676 [2024-11-27 04:43:59.221404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:02.676 [2024-11-27 04:43:59.221412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:02.676 [2024-11-27 04:43:59.221418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.676 [2024-11-27 04:43:59.221435] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:02.676 [2024-11-27 
04:43:59.222002] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:02.676 [2024-11-27 04:43:59.222017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.676 [2024-11-27 04:43:59.222024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:02.676 [2024-11-27 04:43:59.222032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.585 ms 00:27:02.676 [2024-11-27 04:43:59.222037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.676 [2024-11-27 04:43:59.222089] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID e710666e-e67e-47bb-9c51-2525cd36c221 00:27:02.676 [2024-11-27 04:43:59.223051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.676 [2024-11-27 04:43:59.223076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:02.676 [2024-11-27 04:43:59.223083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:02.676 [2024-11-27 04:43:59.223090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.676 [2024-11-27 04:43:59.227710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.676 [2024-11-27 04:43:59.227743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:02.676 [2024-11-27 04:43:59.227750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.587 ms 00:27:02.676 [2024-11-27 04:43:59.227758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.676 [2024-11-27 04:43:59.227787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.676 [2024-11-27 04:43:59.227796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:02.676 [2024-11-27 04:43:59.227802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:02.676 [2024-11-27 04:43:59.227811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.676 [2024-11-27 04:43:59.227851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.676 [2024-11-27 04:43:59.227860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:02.676 [2024-11-27 04:43:59.227868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:02.676 [2024-11-27 04:43:59.227876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.676 [2024-11-27 04:43:59.227893] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:02.676 [2024-11-27 04:43:59.230749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.676 [2024-11-27 04:43:59.230769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:02.676 [2024-11-27 04:43:59.230779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.859 ms 00:27:02.676 [2024-11-27 04:43:59.230785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.676 [2024-11-27 04:43:59.230806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.676 [2024-11-27 04:43:59.230812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:02.676 [2024-11-27 04:43:59.230819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:02.676 [2024-11-27 04:43:59.230825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:02.676 [2024-11-27 04:43:59.230838] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:02.676 [2024-11-27 04:43:59.230955] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:02.676 [2024-11-27 04:43:59.230973] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:02.676 [2024-11-27 04:43:59.230981] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:02.676 [2024-11-27 04:43:59.230991] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:02.676 [2024-11-27 04:43:59.230997] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:02.676 [2024-11-27 04:43:59.231005] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:02.676 [2024-11-27 04:43:59.231012] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:02.677 [2024-11-27 04:43:59.231019] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:02.677 [2024-11-27 04:43:59.231024] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:02.677 [2024-11-27 04:43:59.231032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.677 [2024-11-27 04:43:59.231037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:02.677 [2024-11-27 04:43:59.231045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:27:02.677 [2024-11-27 04:43:59.231050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.677 [2024-11-27 04:43:59.231115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.677 [2024-11-27 04:43:59.231127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:02.677 [2024-11-27 04:43:59.231133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:02.677 [2024-11-27 04:43:59.231138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.677 [2024-11-27 04:43:59.231218] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:02.677 [2024-11-27 04:43:59.231224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:02.677 [2024-11-27 04:43:59.231232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:02.677 [2024-11-27 04:43:59.231238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.677 [2024-11-27 04:43:59.231245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:02.677 [2024-11-27 04:43:59.231250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:02.677 [2024-11-27 04:43:59.231256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:02.677 [2024-11-27 04:43:59.231261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:02.677 [2024-11-27 04:43:59.231267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:02.677 [2024-11-27 04:43:59.231273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.677 [2024-11-27 04:43:59.231279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:02.677 [2024-11-27 04:43:59.231284] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:02.677 [2024-11-27 04:43:59.231290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.677 [2024-11-27 04:43:59.231295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:02.677 [2024-11-27 04:43:59.231301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:02.677 [2024-11-27 04:43:59.231306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.677 [2024-11-27 04:43:59.231314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:02.677 [2024-11-27 04:43:59.231318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:02.677 [2024-11-27 04:43:59.231326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.677 [2024-11-27 04:43:59.231332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:02.677 [2024-11-27 04:43:59.231339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:02.677 [2024-11-27 04:43:59.231344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:02.677 [2024-11-27 04:43:59.231350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:02.677 [2024-11-27 04:43:59.231355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:02.677 [2024-11-27 04:43:59.231361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:02.677 [2024-11-27 04:43:59.231365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:02.677 [2024-11-27 04:43:59.231372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:02.677 [2024-11-27 04:43:59.231377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:02.677 [2024-11-27 04:43:59.231383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:02.677 [2024-11-27 04:43:59.231388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:02.677 [2024-11-27 04:43:59.231394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:02.677 [2024-11-27 04:43:59.231399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:02.677 [2024-11-27 04:43:59.231407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:02.677 [2024-11-27 04:43:59.231411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.677 [2024-11-27 04:43:59.231418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:02.677 [2024-11-27 04:43:59.231422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:02.677 [2024-11-27 04:43:59.231428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.677 [2024-11-27 04:43:59.231433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:02.677 [2024-11-27 04:43:59.231439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:02.677 [2024-11-27 04:43:59.231444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.677 [2024-11-27 04:43:59.231450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:02.677 [2024-11-27 04:43:59.231455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:02.677 [2024-11-27 04:43:59.231461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.677 [2024-11-27 04:43:59.231466] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:02.677 [2024-11-27 04:43:59.231473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:02.677 [2024-11-27 04:43:59.231478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:02.677 [2024-11-27 04:43:59.231486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.677 [2024-11-27 04:43:59.231491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:02.677 [2024-11-27 04:43:59.231499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:02.677 [2024-11-27 04:43:59.231504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:02.677 [2024-11-27 04:43:59.231511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:02.677 [2024-11-27 04:43:59.231516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:02.677 [2024-11-27 04:43:59.231522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:02.677 [2024-11-27 04:43:59.231530] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:02.677 [2024-11-27 04:43:59.231540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:02.677 [2024-11-27 04:43:59.231546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:02.677 [2024-11-27 04:43:59.231553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:02.677 [2024-11-27 04:43:59.231558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:02.677 [2024-11-27 04:43:59.231565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:02.677 [2024-11-27 04:43:59.231570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:02.677 [2024-11-27 04:43:59.231577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:02.677 [2024-11-27 04:43:59.231582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:02.677 [2024-11-27 04:43:59.231589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:02.677 [2024-11-27 04:43:59.231594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:02.677 [2024-11-27 04:43:59.231602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:02.677 [2024-11-27 04:43:59.231607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:02.677 [2024-11-27 04:43:59.231613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:02.677 [2024-11-27 04:43:59.231619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:02.677 [2024-11-27 04:43:59.231626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:02.677 [2024-11-27 04:43:59.231632] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:02.677 [2024-11-27 04:43:59.231639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:02.677 [2024-11-27 04:43:59.231645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:02.677 [2024-11-27 04:43:59.231651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:02.677 [2024-11-27 04:43:59.231657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:02.677 [2024-11-27 04:43:59.231663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:02.677 [2024-11-27 04:43:59.231669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.677 [2024-11-27 04:43:59.231675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:02.677 [2024-11-27 04:43:59.231681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.504 ms 00:27:02.677 [2024-11-27 04:43:59.231687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.677 [2024-11-27 04:43:59.231715] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
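Aside: the trace above has just assembled the FTL device under test. Condensed, with the UUIDs captured into shell variables purely as shorthand (the variables are illustrative, not names from the test script), the RPC sequence is roughly:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # A stale lvstore from an earlier run is deleted first, as traced above.
    # Base device: a 20 GiB (20480 MiB) thin-provisioned lvol carved out of basen1.
    lvs=$($RPC bdev_lvol_create_lvstore basen1 lvs)
    base=$($RPC bdev_lvol_create basen1p0 20480 -t -u "$lvs")
    # NV cache: a 5 GiB (5120 MiB) split of the NVMe namespace at 0000:00:10.0.
    $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create cachen1 -s 5120 1
    # Bind base + cache into the FTL bdev; --l2p_dram_limit 2 bounds the
    # DRAM-resident L2P (the trace later reports "1 (of 2) MiB").
    $RPC -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2

On a fresh device FTL scrubs the NV cache data region before startup can finish, which is the ~2.1 s "Scrub NV cache" step recorded just below.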
00:27:02.677 [2024-11-27 04:43:59.231735] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:05.208 [2024-11-27 04:44:01.330883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.330936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:05.208 [2024-11-27 04:44:01.330950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2099.158 ms 00:27:05.208 [2024-11-27 04:44:01.330960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.355978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.356023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:05.208 [2024-11-27 04:44:01.356034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.816 ms 00:27:05.208 [2024-11-27 04:44:01.356043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.356122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.356134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:05.208 [2024-11-27 04:44:01.356143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:05.208 [2024-11-27 04:44:01.356156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.386140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.386181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:05.208 [2024-11-27 04:44:01.386192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.950 ms 00:27:05.208 [2024-11-27 04:44:01.386201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.386237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.386247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:05.208 [2024-11-27 04:44:01.386255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:05.208 [2024-11-27 04:44:01.386264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.386617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.386642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:05.208 [2024-11-27 04:44:01.386657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.290 ms 00:27:05.208 [2024-11-27 04:44:01.386666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.386704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.386716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:05.208 [2024-11-27 04:44:01.386735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:05.208 [2024-11-27 04:44:01.386746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.400525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.400558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:05.208 [2024-11-27 04:44:01.400568] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.762 ms 00:27:05.208 [2024-11-27 04:44:01.400577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.423938] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:05.208 [2024-11-27 04:44:01.424779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.424803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:05.208 [2024-11-27 04:44:01.424817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.127 ms 00:27:05.208 [2024-11-27 04:44:01.424825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.445610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.445651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:05.208 [2024-11-27 04:44:01.445666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.746 ms 00:27:05.208 [2024-11-27 04:44:01.445675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.445769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.445781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:05.208 [2024-11-27 04:44:01.445793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:27:05.208 [2024-11-27 04:44:01.445800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.467854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.467889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:05.208 [2024-11-27 04:44:01.467903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.004 ms 00:27:05.208 [2024-11-27 04:44:01.467911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.489812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.489845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:05.208 [2024-11-27 04:44:01.489858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.859 ms 00:27:05.208 [2024-11-27 04:44:01.489866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.490417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.490433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:05.208 [2024-11-27 04:44:01.490446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.517 ms 00:27:05.208 [2024-11-27 04:44:01.490453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.556889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.556929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:05.208 [2024-11-27 04:44:01.556946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 66.381 ms 00:27:05.208 [2024-11-27 04:44:01.556955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.580543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:05.208 [2024-11-27 04:44:01.580578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:05.208 [2024-11-27 04:44:01.580591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.511 ms 00:27:05.208 [2024-11-27 04:44:01.580600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.603508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.603539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:05.208 [2024-11-27 04:44:01.603552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.870 ms 00:27:05.208 [2024-11-27 04:44:01.603560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.626146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.626176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:05.208 [2024-11-27 04:44:01.626188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.549 ms 00:27:05.208 [2024-11-27 04:44:01.626196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.626235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.626245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:05.208 [2024-11-27 04:44:01.626257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:05.208 [2024-11-27 04:44:01.626267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.626341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.208 [2024-11-27 04:44:01.626352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:05.208 [2024-11-27 04:44:01.626363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:05.208 [2024-11-27 04:44:01.626371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.208 [2024-11-27 04:44:01.627194] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2405.500 ms, result 0 00:27:05.208 { 00:27:05.208 "name": "ftl", 00:27:05.208 "uuid": "e710666e-e67e-47bb-9c51-2525cd36c221" 00:27:05.208 } 00:27:05.208 04:44:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:05.509 [2024-11-27 04:44:01.838630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.509 04:44:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:05.509 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:05.765 [2024-11-27 04:44:02.247057] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:05.765 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:06.023 [2024-11-27 04:44:02.447403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:06.023 04:44:02 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:06.282 Fill FTL, iteration 1 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80100 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80100 /var/tmp/spdk.tgt.sock 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80100 ']' 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.282 04:44:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:06.539 [2024-11-27 04:44:02.882808] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
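Aside: tcp_dd, whose initiator setup is in progress here, is how the test drives I/O to the FTL bdev over the NVMe/TCP loopback rather than touching it directly. A throwaway spdk_tgt is started on its own RPC socket, attaches a controller to the nqn.2018-09.io.spdk:cnode0 subsystem exported above (surfacing the namespace locally as ftln1), and its bdev subsystem config is saved to ini.json; the helper is then killed and spdk_dd replays that config. A sketch condensed from the trace — the redirection into ini.json is implied by the script, not shown verbatim in the log, and the cpumask/rpc-socket flags are omitted:

    INI_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
    $INI_RPC bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    { echo '{"subsystems": ['; $INI_RPC save_subsystem_config -n bdev; echo ']}'; } > ini.json
    # spdk_dd then consumes the captured config and sees ftln1 as a bdev:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --json=ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0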
00:27:06.539 [2024-11-27 04:44:02.882921] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80100 ] 00:27:06.539 [2024-11-27 04:44:03.043185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.798 [2024-11-27 04:44:03.141493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.364 04:44:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.364 04:44:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:07.364 04:44:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:07.621 ftln1 00:27:07.621 04:44:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:07.621 04:44:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:07.621 04:44:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:07.621 04:44:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80100 00:27:07.621 04:44:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80100 ']' 00:27:07.621 04:44:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80100 00:27:07.621 04:44:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:07.621 04:44:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.621 04:44:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80100 00:27:07.880 04:44:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:07.880 04:44:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:07.880 killing process with pid 80100 00:27:07.880 04:44:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80100' 00:27:07.880 04:44:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80100 00:27:07.880 04:44:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80100 00:27:09.269 04:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:09.269 04:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:09.269 [2024-11-27 04:44:05.716782] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:27:09.269 [2024-11-27 04:44:05.717079] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80142 ] 00:27:09.527 [2024-11-27 04:44:05.876198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.527 [2024-11-27 04:44:05.970272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.901  [2024-11-27T04:44:08.421Z] Copying: 256/1024 [MB] (256 MBps) [2024-11-27T04:44:09.392Z] Copying: 528/1024 [MB] (272 MBps) [2024-11-27T04:44:10.322Z] Copying: 807/1024 [MB] (279 MBps) [2024-11-27T04:44:10.886Z] Copying: 1024/1024 [MB] (average 271 MBps) 00:27:14.299 00:27:14.299 04:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:14.299 04:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:14.299 Calculate MD5 checksum, iteration 1 00:27:14.300 04:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:14.300 04:44:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:14.300 04:44:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:14.300 04:44:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:14.300 04:44:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:14.300 04:44:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:14.300 [2024-11-27 04:44:10.743231] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
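Aside: the fill/verify cycle now underway is the core of upgrade_shutdown.sh. With bs=1048576, count=1024, qd=2 and iterations=2 as set above, each iteration writes 1 GiB of /dev/urandom into ftln1 at a fresh 1 GiB offset, reads the same region back into a scratch file, and records its MD5 for comparison once the shutdown/upgrade path has been exercised. Schematically — variable names as in the traced script, loop body paraphrased:

    for (( i = 0; i < iterations; i++ )); do
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))      # 1024, then 2048, exactly as traced
        tcp_dd --ib=ftln1 --of=$testfile --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        sums[i]=$(md5sum $testfile | cut -f1 -d' ')
    done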
00:27:14.300 [2024-11-27 04:44:10.743345] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80200 ] 00:27:14.557 [2024-11-27 04:44:10.898574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.557 [2024-11-27 04:44:10.981660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.928  [2024-11-27T04:44:12.842Z] Copying: 697/1024 [MB] (697 MBps) [2024-11-27T04:44:13.425Z] Copying: 1024/1024 [MB] (average 696 MBps) 00:27:16.838 00:27:16.838 04:44:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:16.838 04:44:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:19.365 04:44:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:19.365 Fill FTL, iteration 2 00:27:19.365 04:44:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=44e60ff2755ee7aa951131185392f4f4 00:27:19.365 04:44:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:19.365 04:44:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:19.365 04:44:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:19.365 04:44:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:19.365 04:44:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:19.365 04:44:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:19.365 04:44:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:19.365 04:44:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:19.365 04:44:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:19.365 [2024-11-27 04:44:15.435525] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:27:19.365 [2024-11-27 04:44:15.435627] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80256 ] 00:27:19.365 [2024-11-27 04:44:15.580360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.365 [2024-11-27 04:44:15.663396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.750  [2024-11-27T04:44:18.270Z] Copying: 264/1024 [MB] (264 MBps) [2024-11-27T04:44:19.201Z] Copying: 539/1024 [MB] (275 MBps) [2024-11-27T04:44:20.133Z] Copying: 815/1024 [MB] (276 MBps) [2024-11-27T04:44:20.698Z] Copying: 1024/1024 [MB] (average 266 MBps) 00:27:24.111 00:27:24.111 04:44:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:24.111 Calculate MD5 checksum, iteration 2 00:27:24.111 04:44:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:24.111 04:44:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:24.111 04:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:24.111 04:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:24.111 04:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:24.111 04:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:24.111 04:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:24.111 [2024-11-27 04:44:20.499496] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
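Aside: once the second checksum lands below (sums[1]=431355f9...), the test pivots to its actual subject. As the following traces show, it enables verbose_mode, arms prep_upgrade_on_shutdown, and re-reads the property dump, counting non-empty NV cache chunks with jq (used=3 here), before killing the target (pid 79995) so that FTL runs the long Persist L2P / Persist NV cache metadata / Set FTL clean state sequence that closes this excerpt. The RPC side, condensed:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_ftl_set_property -b ftl -p verbose_mode -v true
    $RPC bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    used=$($RPC bdev_ftl_get_properties -b ftl | jq '[.properties[]
        | select(.name == "cache_device") | .chunks[]
        | select(.utilization != 0.0)] | length')
    # Guard traced as "[[ 3 -eq 0 ]]": the cache must not be empty going into
    # the shutdown; the failure action is not visible in this log.
    [[ $used -eq 0 ]]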
00:27:24.111 [2024-11-27 04:44:20.499617] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80312 ] 00:27:24.111 [2024-11-27 04:44:20.654531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.404 [2024-11-27 04:44:20.738643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.777  [2024-11-27T04:44:22.622Z] Copying: 716/1024 [MB] (716 MBps) [2024-11-27T04:44:23.557Z] Copying: 1024/1024 [MB] (average 706 MBps) 00:27:26.970 00:27:26.970 04:44:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:26.970 04:44:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:29.500 04:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:29.500 04:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=431355f9c130cd249fae20154bcb4684 00:27:29.500 04:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:29.500 04:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:29.500 04:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:29.500 [2024-11-27 04:44:25.724704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.500 [2024-11-27 04:44:25.724755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:29.500 [2024-11-27 04:44:25.724768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:29.500 [2024-11-27 04:44:25.724775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.500 [2024-11-27 04:44:25.724793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.500 [2024-11-27 04:44:25.724802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:29.500 [2024-11-27 04:44:25.724809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:29.500 [2024-11-27 04:44:25.724815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.500 [2024-11-27 04:44:25.724831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.500 [2024-11-27 04:44:25.724837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:29.500 [2024-11-27 04:44:25.724844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:29.500 [2024-11-27 04:44:25.724849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.500 [2024-11-27 04:44:25.724897] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.184 ms, result 0 00:27:29.500 true 00:27:29.500 04:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:29.500 { 00:27:29.500 "name": "ftl", 00:27:29.500 "properties": [ 00:27:29.500 { 00:27:29.500 "name": "superblock_version", 00:27:29.500 "value": 5, 00:27:29.500 "read-only": true 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "name": "base_device", 00:27:29.500 "bands": [ 00:27:29.500 { 00:27:29.500 "id": 0, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 
00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 1, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 2, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 3, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 4, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 5, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 6, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 7, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 8, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 9, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 10, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 11, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 12, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 13, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 14, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 15, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 16, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 17, 00:27:29.500 "state": "FREE", 00:27:29.500 "validity": 0.0 00:27:29.500 } 00:27:29.500 ], 00:27:29.500 "read-only": true 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "name": "cache_device", 00:27:29.500 "type": "bdev", 00:27:29.500 "chunks": [ 00:27:29.500 { 00:27:29.500 "id": 0, 00:27:29.500 "state": "INACTIVE", 00:27:29.500 "utilization": 0.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 1, 00:27:29.500 "state": "CLOSED", 00:27:29.500 "utilization": 1.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 2, 00:27:29.500 "state": "CLOSED", 00:27:29.500 "utilization": 1.0 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 3, 00:27:29.500 "state": "OPEN", 00:27:29.500 "utilization": 0.001953125 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "id": 4, 00:27:29.500 "state": "OPEN", 00:27:29.500 "utilization": 0.0 00:27:29.500 } 00:27:29.500 ], 00:27:29.500 "read-only": true 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "name": "verbose_mode", 00:27:29.500 "value": true, 00:27:29.500 "unit": "", 00:27:29.500 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:29.500 }, 00:27:29.500 { 00:27:29.500 "name": "prep_upgrade_on_shutdown", 00:27:29.500 "value": false, 00:27:29.500 "unit": "", 00:27:29.500 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:29.500 } 00:27:29.500 ] 00:27:29.500 } 00:27:29.500 04:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:29.758 [2024-11-27 04:44:26.117039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:29.758 [2024-11-27 04:44:26.117085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:29.758 [2024-11-27 04:44:26.117096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:29.758 [2024-11-27 04:44:26.117103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.758 [2024-11-27 04:44:26.117120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.758 [2024-11-27 04:44:26.117127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:29.758 [2024-11-27 04:44:26.117133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:29.758 [2024-11-27 04:44:26.117139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.758 [2024-11-27 04:44:26.117154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.758 [2024-11-27 04:44:26.117160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:29.758 [2024-11-27 04:44:26.117166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:29.758 [2024-11-27 04:44:26.117172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.758 [2024-11-27 04:44:26.117217] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.171 ms, result 0 00:27:29.758 true 00:27:29.758 04:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:29.758 04:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:29.758 04:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:30.016 04:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:30.016 04:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:30.016 04:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:30.016 [2024-11-27 04:44:26.545468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.016 [2024-11-27 04:44:26.545516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:30.016 [2024-11-27 04:44:26.545528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:30.016 [2024-11-27 04:44:26.545534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.016 [2024-11-27 04:44:26.545551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.016 [2024-11-27 04:44:26.545558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:30.016 [2024-11-27 04:44:26.545564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:30.016 [2024-11-27 04:44:26.545569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.016 [2024-11-27 04:44:26.545584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.016 [2024-11-27 04:44:26.545590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:30.016 [2024-11-27 04:44:26.545596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:30.016 [2024-11-27 04:44:26.545601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:30.017 [2024-11-27 04:44:26.545647] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.169 ms, result 0 00:27:30.017 true 00:27:30.017 04:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:30.275 { 00:27:30.275 "name": "ftl", 00:27:30.275 "properties": [ 00:27:30.275 { 00:27:30.275 "name": "superblock_version", 00:27:30.275 "value": 5, 00:27:30.275 "read-only": true 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "name": "base_device", 00:27:30.275 "bands": [ 00:27:30.275 { 00:27:30.275 "id": 0, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 1, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 2, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 3, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 4, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 5, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 6, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 7, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 8, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 9, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 10, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 11, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 12, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 13, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 14, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 15, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 16, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 17, 00:27:30.275 "state": "FREE", 00:27:30.275 "validity": 0.0 00:27:30.275 } 00:27:30.275 ], 00:27:30.275 "read-only": true 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "name": "cache_device", 00:27:30.275 "type": "bdev", 00:27:30.275 "chunks": [ 00:27:30.275 { 00:27:30.275 "id": 0, 00:27:30.275 "state": "INACTIVE", 00:27:30.275 "utilization": 0.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 1, 00:27:30.275 "state": "CLOSED", 00:27:30.275 "utilization": 1.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 2, 00:27:30.275 "state": "CLOSED", 00:27:30.275 "utilization": 1.0 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 3, 00:27:30.275 "state": "OPEN", 00:27:30.275 "utilization": 0.001953125 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "id": 4, 00:27:30.275 "state": "OPEN", 00:27:30.275 "utilization": 0.0 00:27:30.275 } 00:27:30.275 ], 00:27:30.275 "read-only": true 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "name": "verbose_mode", 
00:27:30.275 "value": true, 00:27:30.275 "unit": "", 00:27:30.275 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:30.275 }, 00:27:30.275 { 00:27:30.275 "name": "prep_upgrade_on_shutdown", 00:27:30.275 "value": true, 00:27:30.275 "unit": "", 00:27:30.275 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:30.275 } 00:27:30.275 ] 00:27:30.275 } 00:27:30.275 04:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:30.275 04:44:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 79995 ]] 00:27:30.275 04:44:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 79995 00:27:30.275 04:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 79995 ']' 00:27:30.275 04:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 79995 00:27:30.275 04:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:30.275 04:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:30.275 04:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79995 00:27:30.275 killing process with pid 79995 00:27:30.275 04:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:30.275 04:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:30.275 04:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79995' 00:27:30.275 04:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 79995 00:27:30.275 04:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 79995 00:27:30.840 [2024-11-27 04:44:27.282137] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:30.840 [2024-11-27 04:44:27.292045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.840 [2024-11-27 04:44:27.292085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:30.840 [2024-11-27 04:44:27.292095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:30.840 [2024-11-27 04:44:27.292102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.841 [2024-11-27 04:44:27.292119] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:30.841 [2024-11-27 04:44:27.294223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.841 [2024-11-27 04:44:27.294251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:30.841 [2024-11-27 04:44:27.294260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.093 ms 00:27:30.841 [2024-11-27 04:44:27.294271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.955 [2024-11-27 04:44:34.988991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.955 [2024-11-27 04:44:34.989057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:38.955 [2024-11-27 04:44:34.989075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7694.676 ms 00:27:38.955 [2024-11-27 04:44:34.989083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.955 [2024-11-27 04:44:34.990233] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:38.955 [2024-11-27 04:44:34.990254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:38.955 [2024-11-27 04:44:34.990263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.134 ms 00:27:38.955 [2024-11-27 04:44:34.990271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.955 [2024-11-27 04:44:34.991401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.955 [2024-11-27 04:44:34.991424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:38.955 [2024-11-27 04:44:34.991433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.105 ms 00:27:38.955 [2024-11-27 04:44:34.991446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.955 [2024-11-27 04:44:35.001201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.955 [2024-11-27 04:44:35.001234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:38.955 [2024-11-27 04:44:35.001243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.725 ms 00:27:38.955 [2024-11-27 04:44:35.001251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.955 [2024-11-27 04:44:35.006968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.955 [2024-11-27 04:44:35.007002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:38.955 [2024-11-27 04:44:35.007012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.685 ms 00:27:38.955 [2024-11-27 04:44:35.007021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.955 [2024-11-27 04:44:35.007090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.955 [2024-11-27 04:44:35.007104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:38.955 [2024-11-27 04:44:35.007112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:38.955 [2024-11-27 04:44:35.007120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.955 [2024-11-27 04:44:35.016058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.955 [2024-11-27 04:44:35.016090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:38.955 [2024-11-27 04:44:35.016099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.923 ms 00:27:38.955 [2024-11-27 04:44:35.016106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.955 [2024-11-27 04:44:35.025147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.955 [2024-11-27 04:44:35.025181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:38.955 [2024-11-27 04:44:35.025190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.012 ms 00:27:38.955 [2024-11-27 04:44:35.025196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.955 [2024-11-27 04:44:35.033858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.955 [2024-11-27 04:44:35.033892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:38.955 [2024-11-27 04:44:35.033901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.631 ms 00:27:38.955 [2024-11-27 04:44:35.033908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.955 [2024-11-27 04:44:35.042820] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.955 [2024-11-27 04:44:35.042851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:38.955 [2024-11-27 04:44:35.042860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.842 ms 00:27:38.955 [2024-11-27 04:44:35.042868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 [2024-11-27 04:44:35.042897] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:38.956 [2024-11-27 04:44:35.042919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:38.956 [2024-11-27 04:44:35.042929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:38.956 [2024-11-27 04:44:35.042937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:38.956 [2024-11-27 04:44:35.042945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:38.956 [2024-11-27 04:44:35.042953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:38.956 [2024-11-27 04:44:35.042960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:38.956 [2024-11-27 04:44:35.042967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:38.956 [2024-11-27 04:44:35.042974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:38.956 [2024-11-27 04:44:35.042981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:38.956 [2024-11-27 04:44:35.042989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:38.956 [2024-11-27 04:44:35.042996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:38.956 [2024-11-27 04:44:35.043003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:38.956 [2024-11-27 04:44:35.043010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:38.956 [2024-11-27 04:44:35.043017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:38.956 [2024-11-27 04:44:35.043024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:38.956 [2024-11-27 04:44:35.043031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:38.956 [2024-11-27 04:44:35.043038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:38.956 [2024-11-27 04:44:35.043045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:38.956 [2024-11-27 04:44:35.043054] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:38.956 [2024-11-27 04:44:35.043061] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e710666e-e67e-47bb-9c51-2525cd36c221 00:27:38.956 [2024-11-27 04:44:35.043069] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:38.956 [2024-11-27 04:44:35.043076] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:38.956 [2024-11-27 04:44:35.043083] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:38.956 [2024-11-27 04:44:35.043090] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:38.956 [2024-11-27 04:44:35.043100] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:38.956 [2024-11-27 04:44:35.043107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:38.956 [2024-11-27 04:44:35.043117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:38.956 [2024-11-27 04:44:35.043128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:38.956 [2024-11-27 04:44:35.043134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:38.956 [2024-11-27 04:44:35.043141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.956 [2024-11-27 04:44:35.043149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:38.956 [2024-11-27 04:44:35.043157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.245 ms 00:27:38.956 [2024-11-27 04:44:35.043164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 [2024-11-27 04:44:35.055534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.956 [2024-11-27 04:44:35.055566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:38.956 [2024-11-27 04:44:35.055580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.354 ms 00:27:38.956 [2024-11-27 04:44:35.055588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 [2024-11-27 04:44:35.055936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.956 [2024-11-27 04:44:35.055954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:38.956 [2024-11-27 04:44:35.055962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.329 ms 00:27:38.956 [2024-11-27 04:44:35.055969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 [2024-11-27 04:44:35.097131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.956 [2024-11-27 04:44:35.097170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:38.956 [2024-11-27 04:44:35.097181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.956 [2024-11-27 04:44:35.097190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 [2024-11-27 04:44:35.097223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.956 [2024-11-27 04:44:35.097231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:38.956 [2024-11-27 04:44:35.097239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.956 [2024-11-27 04:44:35.097246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 [2024-11-27 04:44:35.097311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.956 [2024-11-27 04:44:35.097320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:38.956 [2024-11-27 04:44:35.097332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.956 [2024-11-27 04:44:35.097339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 [2024-11-27 04:44:35.097355] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.956 [2024-11-27 04:44:35.097363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:38.956 [2024-11-27 04:44:35.097370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.956 [2024-11-27 04:44:35.097377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 [2024-11-27 04:44:35.173940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.956 [2024-11-27 04:44:35.173980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:38.956 [2024-11-27 04:44:35.173994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.956 [2024-11-27 04:44:35.174002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 [2024-11-27 04:44:35.237216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.956 [2024-11-27 04:44:35.237263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:38.956 [2024-11-27 04:44:35.237273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.956 [2024-11-27 04:44:35.237280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 [2024-11-27 04:44:35.237355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.956 [2024-11-27 04:44:35.237365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:38.956 [2024-11-27 04:44:35.237373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.956 [2024-11-27 04:44:35.237385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 [2024-11-27 04:44:35.237423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.956 [2024-11-27 04:44:35.237432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:38.956 [2024-11-27 04:44:35.237439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.956 [2024-11-27 04:44:35.237446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 [2024-11-27 04:44:35.237527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.956 [2024-11-27 04:44:35.237536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:38.956 [2024-11-27 04:44:35.237544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.956 [2024-11-27 04:44:35.237551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 [2024-11-27 04:44:35.237580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.956 [2024-11-27 04:44:35.237589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:38.956 [2024-11-27 04:44:35.237597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.956 [2024-11-27 04:44:35.237604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 [2024-11-27 04:44:35.237637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.956 [2024-11-27 04:44:35.237645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:38.956 [2024-11-27 04:44:35.237652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.956 [2024-11-27 04:44:35.237659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 
[2024-11-27 04:44:35.237702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.956 [2024-11-27 04:44:35.237711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:38.956 [2024-11-27 04:44:35.237719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.956 [2024-11-27 04:44:35.237756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.956 [2024-11-27 04:44:35.237868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7945.772 ms, result 0 00:27:47.063 04:44:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:47.063 04:44:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:47.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.063 04:44:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:47.063 04:44:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:47.063 04:44:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:47.063 04:44:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80493 00:27:47.063 04:44:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:47.063 04:44:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80493 00:27:47.063 04:44:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80493 ']' 00:27:47.063 04:44:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:47.063 04:44:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.063 04:44:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:47.063 04:44:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.063 04:44:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:47.063 04:44:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:47.063 [2024-11-27 04:44:42.389645] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
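A note on the statistics dump in the 'Set FTL clean state' step above: the WAF figure FTL prints is total writes divided by user writes (roughly, host data plus FTL's own metadata and relocation traffic, over host data alone), so the ftl_debug.c counters can be cross-checked by hand. A back-of-the-envelope check of the reported 1.5006, not part of the test itself:

    # counters copied verbatim from the ftl_debug.c stats dump above
    total_writes=786752
    user_writes=524288
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.4f\n", t / u }'
    # prints: WAF: 1.5006

With that, the clean-shutdown half of the test is done ('FTL shutdown' finished in ~7.9 s with result 0), and the harness relaunches spdk_tgt from the saved tgt.json for the restart-and-validate phase that follows.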
00:27:47.063 [2024-11-27 04:44:42.390189] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80493 ] 00:27:47.063 [2024-11-27 04:44:42.554646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.063 [2024-11-27 04:44:42.651463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.063 [2024-11-27 04:44:43.335903] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:47.063 [2024-11-27 04:44:43.335972] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:47.063 [2024-11-27 04:44:43.480125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.063 [2024-11-27 04:44:43.480169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:47.063 [2024-11-27 04:44:43.480182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:47.063 [2024-11-27 04:44:43.480190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.063 [2024-11-27 04:44:43.480243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.063 [2024-11-27 04:44:43.480253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:47.063 [2024-11-27 04:44:43.480260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:47.063 [2024-11-27 04:44:43.480267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.063 [2024-11-27 04:44:43.480286] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:47.063 [2024-11-27 04:44:43.480941] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:47.063 [2024-11-27 04:44:43.480958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.063 [2024-11-27 04:44:43.480965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:47.063 [2024-11-27 04:44:43.480973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.675 ms 00:27:47.063 [2024-11-27 04:44:43.480995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.063 [2024-11-27 04:44:43.482456] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:47.063 [2024-11-27 04:44:43.494458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.063 [2024-11-27 04:44:43.494497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:47.063 [2024-11-27 04:44:43.494508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.003 ms 00:27:47.063 [2024-11-27 04:44:43.494517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.063 [2024-11-27 04:44:43.494567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.063 [2024-11-27 04:44:43.494576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:47.063 [2024-11-27 04:44:43.494584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:47.063 [2024-11-27 04:44:43.494591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.063 [2024-11-27 04:44:43.499390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.063 [2024-11-27 
04:44:43.499421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:47.063 [2024-11-27 04:44:43.499432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.738 ms 00:27:47.063 [2024-11-27 04:44:43.499439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.063 [2024-11-27 04:44:43.499493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.063 [2024-11-27 04:44:43.499504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:47.063 [2024-11-27 04:44:43.499511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:47.063 [2024-11-27 04:44:43.499518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.063 [2024-11-27 04:44:43.499559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.063 [2024-11-27 04:44:43.499571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:47.063 [2024-11-27 04:44:43.499579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:47.063 [2024-11-27 04:44:43.499585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.063 [2024-11-27 04:44:43.499606] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:47.063 [2024-11-27 04:44:43.502935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.063 [2024-11-27 04:44:43.502965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:47.063 [2024-11-27 04:44:43.502977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.333 ms 00:27:47.063 [2024-11-27 04:44:43.502984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.063 [2024-11-27 04:44:43.503012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.063 [2024-11-27 04:44:43.503020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:47.063 [2024-11-27 04:44:43.503028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:47.063 [2024-11-27 04:44:43.503034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.063 [2024-11-27 04:44:43.503053] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:47.063 [2024-11-27 04:44:43.503072] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:47.063 [2024-11-27 04:44:43.503105] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:47.063 [2024-11-27 04:44:43.503120] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:47.063 [2024-11-27 04:44:43.503220] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:47.063 [2024-11-27 04:44:43.503229] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:47.063 [2024-11-27 04:44:43.503239] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:47.063 [2024-11-27 04:44:43.503248] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:47.063 [2024-11-27 04:44:43.503259] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:47.063 [2024-11-27 04:44:43.503267] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:47.063 [2024-11-27 04:44:43.503274] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:47.063 [2024-11-27 04:44:43.503280] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:47.063 [2024-11-27 04:44:43.503287] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:47.063 [2024-11-27 04:44:43.503294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.063 [2024-11-27 04:44:43.503302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:47.063 [2024-11-27 04:44:43.503309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.243 ms 00:27:47.063 [2024-11-27 04:44:43.503316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.063 [2024-11-27 04:44:43.503400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.063 [2024-11-27 04:44:43.503408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:47.063 [2024-11-27 04:44:43.503417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:27:47.063 [2024-11-27 04:44:43.503424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.063 [2024-11-27 04:44:43.503522] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:47.063 [2024-11-27 04:44:43.503532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:47.063 [2024-11-27 04:44:43.503540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:47.063 [2024-11-27 04:44:43.503547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.063 [2024-11-27 04:44:43.503554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:47.063 [2024-11-27 04:44:43.503561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:47.063 [2024-11-27 04:44:43.503567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:47.063 [2024-11-27 04:44:43.503573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:47.063 [2024-11-27 04:44:43.503580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:47.063 [2024-11-27 04:44:43.503586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.063 [2024-11-27 04:44:43.503592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:47.064 [2024-11-27 04:44:43.503599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:47.064 [2024-11-27 04:44:43.503605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.064 [2024-11-27 04:44:43.503612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:47.064 [2024-11-27 04:44:43.503618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:47.064 [2024-11-27 04:44:43.503624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.064 [2024-11-27 04:44:43.503634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:47.064 [2024-11-27 04:44:43.503642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:47.064 [2024-11-27 04:44:43.503648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.064 [2024-11-27 04:44:43.503654] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:47.064 [2024-11-27 04:44:43.503661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:47.064 [2024-11-27 04:44:43.503667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.064 [2024-11-27 04:44:43.503673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:47.064 [2024-11-27 04:44:43.503686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:47.064 [2024-11-27 04:44:43.503692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.064 [2024-11-27 04:44:43.503699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:47.064 [2024-11-27 04:44:43.503705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:47.064 [2024-11-27 04:44:43.503711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.064 [2024-11-27 04:44:43.503717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:47.064 [2024-11-27 04:44:43.503736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:47.064 [2024-11-27 04:44:43.503742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.064 [2024-11-27 04:44:43.503749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:47.064 [2024-11-27 04:44:43.503755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:47.064 [2024-11-27 04:44:43.503761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.064 [2024-11-27 04:44:43.503767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:47.064 [2024-11-27 04:44:43.503774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:47.064 [2024-11-27 04:44:43.503780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.064 [2024-11-27 04:44:43.503786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:47.064 [2024-11-27 04:44:43.503792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:47.064 [2024-11-27 04:44:43.503798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.064 [2024-11-27 04:44:43.503805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:47.064 [2024-11-27 04:44:43.503811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:47.064 [2024-11-27 04:44:43.503817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.064 [2024-11-27 04:44:43.503823] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:47.064 [2024-11-27 04:44:43.503831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:47.064 [2024-11-27 04:44:43.503838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:47.064 [2024-11-27 04:44:43.503847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.064 [2024-11-27 04:44:43.503854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:47.064 [2024-11-27 04:44:43.503863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:47.064 [2024-11-27 04:44:43.503871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:47.064 [2024-11-27 04:44:43.503877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:47.064 [2024-11-27 04:44:43.503884] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:47.064 [2024-11-27 04:44:43.503890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:47.064 [2024-11-27 04:44:43.503898] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:47.064 [2024-11-27 04:44:43.503907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:47.064 [2024-11-27 04:44:43.503917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:47.064 [2024-11-27 04:44:43.503926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:47.064 [2024-11-27 04:44:43.503932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:47.064 [2024-11-27 04:44:43.503939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:47.064 [2024-11-27 04:44:43.503946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:47.064 [2024-11-27 04:44:43.503953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:47.064 [2024-11-27 04:44:43.503960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:47.064 [2024-11-27 04:44:43.503967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:47.064 [2024-11-27 04:44:43.503973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:47.064 [2024-11-27 04:44:43.503980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:47.064 [2024-11-27 04:44:43.503987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:47.064 [2024-11-27 04:44:43.503993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:47.064 [2024-11-27 04:44:43.504000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:47.064 [2024-11-27 04:44:43.504007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:47.064 [2024-11-27 04:44:43.504014] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:47.064 [2024-11-27 04:44:43.504021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:47.064 [2024-11-27 04:44:43.504028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:47.064 [2024-11-27 04:44:43.504035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:47.064 [2024-11-27 04:44:43.504042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:47.064 [2024-11-27 04:44:43.504049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:47.064 [2024-11-27 04:44:43.504056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.064 [2024-11-27 04:44:43.504063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:47.064 [2024-11-27 04:44:43.504070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.601 ms 00:27:47.064 [2024-11-27 04:44:43.504076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.064 [2024-11-27 04:44:43.504130] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:47.064 [2024-11-27 04:44:43.504143] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:49.594 [2024-11-27 04:44:45.783613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.594 [2024-11-27 04:44:45.783678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:49.594 [2024-11-27 04:44:45.783693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2279.475 ms 00:27:49.594 [2024-11-27 04:44:45.783701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.594 [2024-11-27 04:44:45.808634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.594 [2024-11-27 04:44:45.808681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:49.594 [2024-11-27 04:44:45.808695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.717 ms 00:27:49.594 [2024-11-27 04:44:45.808703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.594 [2024-11-27 04:44:45.808800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.594 [2024-11-27 04:44:45.808812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:49.594 [2024-11-27 04:44:45.808821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:49.594 [2024-11-27 04:44:45.808828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.594 [2024-11-27 04:44:45.838893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.594 [2024-11-27 04:44:45.838933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:49.594 [2024-11-27 04:44:45.838947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.028 ms 00:27:49.594 [2024-11-27 04:44:45.838955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.594 [2024-11-27 04:44:45.838984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.594 [2024-11-27 04:44:45.838992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:49.594 [2024-11-27 04:44:45.839000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:49.594 [2024-11-27 04:44:45.839007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.594 [2024-11-27 04:44:45.839345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.594 [2024-11-27 04:44:45.839361] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:49.594 [2024-11-27 04:44:45.839369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.283 ms 00:27:49.594 [2024-11-27 04:44:45.839380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.594 [2024-11-27 04:44:45.839417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.594 [2024-11-27 04:44:45.839426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:49.594 [2024-11-27 04:44:45.839434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:49.594 [2024-11-27 04:44:45.839441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.594 [2024-11-27 04:44:45.853252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.594 [2024-11-27 04:44:45.853284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:49.594 [2024-11-27 04:44:45.853294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.789 ms 00:27:49.594 [2024-11-27 04:44:45.853301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.594 [2024-11-27 04:44:45.882717] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:49.594 [2024-11-27 04:44:45.882775] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:49.594 [2024-11-27 04:44:45.882788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.594 [2024-11-27 04:44:45.882797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:49.594 [2024-11-27 04:44:45.882809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.390 ms 00:27:49.594 [2024-11-27 04:44:45.882816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.594 [2024-11-27 04:44:45.896527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.594 [2024-11-27 04:44:45.896563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:49.594 [2024-11-27 04:44:45.896575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.669 ms 00:27:49.594 [2024-11-27 04:44:45.896584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.594 [2024-11-27 04:44:45.907784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.594 [2024-11-27 04:44:45.907816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:49.594 [2024-11-27 04:44:45.907826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.161 ms 00:27:49.594 [2024-11-27 04:44:45.907833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.594 [2024-11-27 04:44:45.918888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.594 [2024-11-27 04:44:45.918920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:49.594 [2024-11-27 04:44:45.918930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.022 ms 00:27:49.594 [2024-11-27 04:44:45.918937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.594 [2024-11-27 04:44:45.919555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.594 [2024-11-27 04:44:45.919580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:49.594 [2024-11-27 
04:44:45.919590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.530 ms 00:27:49.594 [2024-11-27 04:44:45.919597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.594 [2024-11-27 04:44:45.976127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.594 [2024-11-27 04:44:45.976180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:49.594 [2024-11-27 04:44:45.976194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 56.510 ms 00:27:49.594 [2024-11-27 04:44:45.976203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.594 [2024-11-27 04:44:45.986396] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:49.594 [2024-11-27 04:44:45.987060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.594 [2024-11-27 04:44:45.987089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:49.594 [2024-11-27 04:44:45.987099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.797 ms 00:27:49.594 [2024-11-27 04:44:45.987106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.594 [2024-11-27 04:44:45.987183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.595 [2024-11-27 04:44:45.987197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:49.595 [2024-11-27 04:44:45.987206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:49.595 [2024-11-27 04:44:45.987213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.595 [2024-11-27 04:44:45.987281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.595 [2024-11-27 04:44:45.987292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:49.595 [2024-11-27 04:44:45.987300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:49.595 [2024-11-27 04:44:45.987307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.595 [2024-11-27 04:44:45.987327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.595 [2024-11-27 04:44:45.987335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:49.595 [2024-11-27 04:44:45.987346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:49.595 [2024-11-27 04:44:45.987353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.595 [2024-11-27 04:44:45.987383] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:49.595 [2024-11-27 04:44:45.987393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.595 [2024-11-27 04:44:45.987400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:49.595 [2024-11-27 04:44:45.987408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:49.595 [2024-11-27 04:44:45.987415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.595 [2024-11-27 04:44:46.010118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.595 [2024-11-27 04:44:46.010156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:49.595 [2024-11-27 04:44:46.010167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.684 ms 00:27:49.595 [2024-11-27 04:44:46.010175] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.595 [2024-11-27 04:44:46.010244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.595 [2024-11-27 04:44:46.010254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:49.595 [2024-11-27 04:44:46.010262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:49.595 [2024-11-27 04:44:46.010269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.595 [2024-11-27 04:44:46.011192] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2530.660 ms, result 0 00:27:49.595 [2024-11-27 04:44:46.026480] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.595 [2024-11-27 04:44:46.042455] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:49.595 [2024-11-27 04:44:46.050581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:50.159 04:44:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:50.159 04:44:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:50.159 04:44:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:50.159 04:44:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:50.159 04:44:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:50.417 [2024-11-27 04:44:46.771261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.417 [2024-11-27 04:44:46.771316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:50.417 [2024-11-27 04:44:46.771333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:50.417 [2024-11-27 04:44:46.771341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.417 [2024-11-27 04:44:46.771366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.417 [2024-11-27 04:44:46.771375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:50.417 [2024-11-27 04:44:46.771383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:50.417 [2024-11-27 04:44:46.771390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.417 [2024-11-27 04:44:46.771409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.417 [2024-11-27 04:44:46.771417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:50.417 [2024-11-27 04:44:46.771426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:50.417 [2024-11-27 04:44:46.771433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.417 [2024-11-27 04:44:46.771490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.220 ms, result 0 00:27:50.417 true 00:27:50.417 04:44:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:50.417 { 00:27:50.417 "name": "ftl", 00:27:50.417 "properties": [ 00:27:50.417 { 00:27:50.417 "name": "superblock_version", 00:27:50.417 "value": 5, 00:27:50.417 "read-only": true 00:27:50.417 }, 
00:27:50.417 { 00:27:50.417 "name": "base_device", 00:27:50.417 "bands": [ 00:27:50.417 { 00:27:50.417 "id": 0, 00:27:50.417 "state": "CLOSED", 00:27:50.417 "validity": 1.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 1, 00:27:50.417 "state": "CLOSED", 00:27:50.417 "validity": 1.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 2, 00:27:50.417 "state": "CLOSED", 00:27:50.417 "validity": 0.007843137254901933 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 3, 00:27:50.417 "state": "FREE", 00:27:50.417 "validity": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 4, 00:27:50.417 "state": "FREE", 00:27:50.417 "validity": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 5, 00:27:50.417 "state": "FREE", 00:27:50.417 "validity": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 6, 00:27:50.417 "state": "FREE", 00:27:50.417 "validity": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 7, 00:27:50.417 "state": "FREE", 00:27:50.417 "validity": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 8, 00:27:50.417 "state": "FREE", 00:27:50.417 "validity": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 9, 00:27:50.417 "state": "FREE", 00:27:50.417 "validity": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 10, 00:27:50.417 "state": "FREE", 00:27:50.417 "validity": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 11, 00:27:50.417 "state": "FREE", 00:27:50.417 "validity": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 12, 00:27:50.417 "state": "FREE", 00:27:50.417 "validity": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 13, 00:27:50.417 "state": "FREE", 00:27:50.417 "validity": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 14, 00:27:50.417 "state": "FREE", 00:27:50.417 "validity": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 15, 00:27:50.417 "state": "FREE", 00:27:50.417 "validity": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 16, 00:27:50.417 "state": "FREE", 00:27:50.417 "validity": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 17, 00:27:50.417 "state": "FREE", 00:27:50.417 "validity": 0.0 00:27:50.417 } 00:27:50.417 ], 00:27:50.417 "read-only": true 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "name": "cache_device", 00:27:50.417 "type": "bdev", 00:27:50.417 "chunks": [ 00:27:50.417 { 00:27:50.417 "id": 0, 00:27:50.417 "state": "INACTIVE", 00:27:50.417 "utilization": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 1, 00:27:50.417 "state": "OPEN", 00:27:50.417 "utilization": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 2, 00:27:50.417 "state": "OPEN", 00:27:50.417 "utilization": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.417 "id": 3, 00:27:50.417 "state": "FREE", 00:27:50.417 "utilization": 0.0 00:27:50.417 }, 00:27:50.417 { 00:27:50.418 "id": 4, 00:27:50.418 "state": "FREE", 00:27:50.418 "utilization": 0.0 00:27:50.418 } 00:27:50.418 ], 00:27:50.418 "read-only": true 00:27:50.418 }, 00:27:50.418 { 00:27:50.418 "name": "verbose_mode", 00:27:50.418 "value": true, 00:27:50.418 "unit": "", 00:27:50.418 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:50.418 }, 00:27:50.418 { 00:27:50.418 "name": "prep_upgrade_on_shutdown", 00:27:50.418 "value": false, 00:27:50.418 "unit": "", 00:27:50.418 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:50.418 } 00:27:50.418 ] 00:27:50.418 } 00:27:50.418 04:44:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:50.418 04:44:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:50.418 04:44:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:50.675 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:50.675 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:50.675 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:50.675 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:50.675 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:50.933 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:50.933 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:50.933 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:50.933 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:50.933 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:50.933 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:50.933 Validate MD5 checksum, iteration 1 00:27:50.933 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:50.933 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:50.933 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:50.933 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:50.933 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:50.933 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:50.933 04:44:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:50.933 [2024-11-27 04:44:47.448809] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:27:50.933 [2024-11-27 04:44:47.448926] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80566 ] 00:27:51.190 [2024-11-27 04:44:47.608307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.190 [2024-11-27 04:44:47.707447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.086  [2024-11-27T04:44:49.931Z] Copying: 706/1024 [MB] (706 MBps) [2024-11-27T04:44:50.864Z] Copying: 1024/1024 [MB] (average 692 MBps) 00:27:54.277 00:27:54.277 04:44:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:54.277 04:44:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:56.806 04:44:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:56.806 04:44:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=44e60ff2755ee7aa951131185392f4f4 00:27:56.806 04:44:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 44e60ff2755ee7aa951131185392f4f4 != \4\4\e\6\0\f\f\2\7\5\5\e\e\7\a\a\9\5\1\1\3\1\1\8\5\3\9\2\f\4\f\4 ]] 00:27:56.806 04:44:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:56.806 04:44:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:56.806 04:44:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:56.806 Validate MD5 checksum, iteration 2 00:27:56.806 04:44:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:56.806 04:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:56.806 04:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:56.806 04:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:56.806 04:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:56.806 04:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:56.806 [2024-11-27 04:44:52.959531] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
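Each iteration's read goes through tcp_dd, which (per the common.sh xtrace above) checks that the initiator-side ini.json exists and then runs spdk_dd as a short-lived NVMe/TCP initiator application; that is why a fresh 'Starting SPDK' banner and DPDK EAL initialization appear for every window. A sketch of the helper as the trace shows it, with paths as they appear in this run:

    # from ftl/common.sh, per the xtrace: one spdk_dd app per transfer
    tcp_dd() {
        tcp_initiator_setup   # a no-op here: ini.json already exists
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"
    }

Iteration 1 hashed to 44e60ff2755ee7aa951131185392f4f4 and matched, so the loop advances skip to 1024 and repeats for the second window.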
00:27:56.806 [2024-11-27 04:44:52.959655] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80634 ] 00:27:56.806 [2024-11-27 04:44:53.119587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.806 [2024-11-27 04:44:53.219575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.179  [2024-11-27T04:44:55.331Z] Copying: 684/1024 [MB] (684 MBps) [2024-11-27T04:44:56.283Z] Copying: 1024/1024 [MB] (average 679 MBps) 00:27:59.696 00:27:59.696 04:44:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:59.696 04:44:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=431355f9c130cd249fae20154bcb4684 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 431355f9c130cd249fae20154bcb4684 != \4\3\1\3\5\5\f\9\c\1\3\0\c\d\2\4\9\f\a\e\2\0\1\5\4\b\c\b\4\6\8\4 ]] 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80493 ]] 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80493 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80684 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80684 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80684 ']' 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
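With both slices verified, @114 exercises the actual shutdown scenario: tcp_target_shutdown_dirty (ftl/common.sh@137-139 above) SIGKILLs the running target, so FTL never gets to persist its metadata or mark the superblock clean, and @115 immediately brings up a replacement spdk_tgt (pid 80684) from the saved tgt.json and waits for its RPC socket. Roughly, with helper bodies as a sketch rather than the verbatim functions:

  kill -9 "$spdk_tgt_pid"        # dirty: the FTL shutdown path never runs
  unset spdk_tgt_pid
  "$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' --config="$tgt_json" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"  # poll /var/tmp/spdk.sock until RPC answers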
00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:01.070 04:44:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:01.328 [2024-11-27 04:44:57.707847] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:28:01.328 [2024-11-27 04:44:57.707969] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80684 ] 00:28:01.328 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 80493 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:01.328 [2024-11-27 04:44:57.862516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.586 [2024-11-27 04:44:57.947484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.152 [2024-11-27 04:44:58.534810] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:02.152 [2024-11-27 04:44:58.534868] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:02.152 [2024-11-27 04:44:58.678181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.152 [2024-11-27 04:44:58.678235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:02.152 [2024-11-27 04:44:58.678248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:02.152 [2024-11-27 04:44:58.678256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.152 [2024-11-27 04:44:58.678310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.152 [2024-11-27 04:44:58.678320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:02.152 [2024-11-27 04:44:58.678329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:02.152 [2024-11-27 04:44:58.678336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.152 [2024-11-27 04:44:58.678357] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:02.152 [2024-11-27 04:44:58.679046] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:02.152 [2024-11-27 04:44:58.679074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.152 [2024-11-27 04:44:58.679082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:02.152 [2024-11-27 04:44:58.679090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.721 ms 00:28:02.152 [2024-11-27 04:44:58.679098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.152 [2024-11-27 04:44:58.679447] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:02.152 [2024-11-27 04:44:58.694924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.152 [2024-11-27 04:44:58.694960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:02.152 [2024-11-27 04:44:58.694973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.477 ms 
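Because the previous target was SIGKILLed, the superblock comes back dirty ("SHM: clean 0, shm_clean 0" above), so the startup traced below takes the full recovery path — Recover band state, Restore/Preprocess P2L checkpoints, per-chunk open-chunk recovery — instead of a fast clean start. Every management step is logged by mngt/ftl_mngt.c as an Action/name/duration/status quadruple; when skimming a long run, step names and durations can be paired with a one-liner like this (a log-reading aid, not part of the test; assumes the console output is saved one entry per line in tgt.log):

  paste <(sed -n 's/.*trace_step.*name: //p' tgt.log) \
        <(sed -n 's/.*trace_step.*duration: //p' tgt.log)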
00:28:02.152 [2024-11-27 04:44:58.694981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.152 [2024-11-27 04:44:58.703821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.152 [2024-11-27 04:44:58.703855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:02.152 [2024-11-27 04:44:58.703865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:02.152 [2024-11-27 04:44:58.703872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.152 [2024-11-27 04:44:58.704183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.152 [2024-11-27 04:44:58.704206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:02.152 [2024-11-27 04:44:58.704215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.238 ms 00:28:02.152 [2024-11-27 04:44:58.704222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.152 [2024-11-27 04:44:58.704269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.152 [2024-11-27 04:44:58.704278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:02.152 [2024-11-27 04:44:58.704286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:02.152 [2024-11-27 04:44:58.704298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.152 [2024-11-27 04:44:58.704323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.152 [2024-11-27 04:44:58.704337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:02.152 [2024-11-27 04:44:58.704345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:02.152 [2024-11-27 04:44:58.704352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.152 [2024-11-27 04:44:58.704372] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:02.152 [2024-11-27 04:44:58.707335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.152 [2024-11-27 04:44:58.707371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:02.152 [2024-11-27 04:44:58.707380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.967 ms 00:28:02.152 [2024-11-27 04:44:58.707392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.152 [2024-11-27 04:44:58.707422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.152 [2024-11-27 04:44:58.707431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:02.152 [2024-11-27 04:44:58.707439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:02.152 [2024-11-27 04:44:58.707446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.152 [2024-11-27 04:44:58.707465] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:02.152 [2024-11-27 04:44:58.707481] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:02.152 [2024-11-27 04:44:58.707515] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:02.152 [2024-11-27 04:44:58.707531] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:02.152 [2024-11-27 
04:44:58.707632] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:02.152 [2024-11-27 04:44:58.707648] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:02.152 [2024-11-27 04:44:58.707658] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:02.153 [2024-11-27 04:44:58.707668] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:02.153 [2024-11-27 04:44:58.707678] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:02.153 [2024-11-27 04:44:58.707685] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:02.153 [2024-11-27 04:44:58.707692] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:02.153 [2024-11-27 04:44:58.707699] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:02.153 [2024-11-27 04:44:58.707706] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:02.153 [2024-11-27 04:44:58.707715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.153 [2024-11-27 04:44:58.707733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:02.153 [2024-11-27 04:44:58.707741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.252 ms 00:28:02.153 [2024-11-27 04:44:58.707748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.153 [2024-11-27 04:44:58.707831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.153 [2024-11-27 04:44:58.707839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:02.153 [2024-11-27 04:44:58.707846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:28:02.153 [2024-11-27 04:44:58.707854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.153 [2024-11-27 04:44:58.707966] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:02.153 [2024-11-27 04:44:58.707984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:02.153 [2024-11-27 04:44:58.707992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:02.153 [2024-11-27 04:44:58.708000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.153 [2024-11-27 04:44:58.708007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:02.153 [2024-11-27 04:44:58.708014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:02.153 [2024-11-27 04:44:58.708021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:02.153 [2024-11-27 04:44:58.708027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:02.153 [2024-11-27 04:44:58.708034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:02.153 [2024-11-27 04:44:58.708041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.153 [2024-11-27 04:44:58.708047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:02.153 [2024-11-27 04:44:58.708053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:02.153 [2024-11-27 04:44:58.708059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.153 [2024-11-27 
04:44:58.708065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:02.153 [2024-11-27 04:44:58.708072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:02.153 [2024-11-27 04:44:58.708080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.153 [2024-11-27 04:44:58.708087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:02.153 [2024-11-27 04:44:58.708093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:02.153 [2024-11-27 04:44:58.708099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.153 [2024-11-27 04:44:58.708106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:02.153 [2024-11-27 04:44:58.708112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:02.153 [2024-11-27 04:44:58.708124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:02.153 [2024-11-27 04:44:58.708131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:02.153 [2024-11-27 04:44:58.708137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:02.153 [2024-11-27 04:44:58.708143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:02.153 [2024-11-27 04:44:58.708150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:02.153 [2024-11-27 04:44:58.708156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:02.153 [2024-11-27 04:44:58.708163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:02.153 [2024-11-27 04:44:58.708169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:02.153 [2024-11-27 04:44:58.708175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:02.153 [2024-11-27 04:44:58.708182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:02.153 [2024-11-27 04:44:58.708188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:02.153 [2024-11-27 04:44:58.708194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:02.153 [2024-11-27 04:44:58.708200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.153 [2024-11-27 04:44:58.708206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:02.153 [2024-11-27 04:44:58.708212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:02.153 [2024-11-27 04:44:58.708218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.153 [2024-11-27 04:44:58.708225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:02.153 [2024-11-27 04:44:58.708231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:02.153 [2024-11-27 04:44:58.708237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.153 [2024-11-27 04:44:58.708244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:02.153 [2024-11-27 04:44:58.708250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:02.153 [2024-11-27 04:44:58.708256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.153 [2024-11-27 04:44:58.708263] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:02.153 [2024-11-27 04:44:58.708270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:02.153 
[2024-11-27 04:44:58.708277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:02.153 [2024-11-27 04:44:58.708284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.153 [2024-11-27 04:44:58.708292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:02.153 [2024-11-27 04:44:58.708299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:02.153 [2024-11-27 04:44:58.708305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:02.153 [2024-11-27 04:44:58.708312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:02.153 [2024-11-27 04:44:58.708318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:02.153 [2024-11-27 04:44:58.708324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:02.153 [2024-11-27 04:44:58.708332] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:02.153 [2024-11-27 04:44:58.708341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:02.153 [2024-11-27 04:44:58.708349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:02.153 [2024-11-27 04:44:58.708356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:02.153 [2024-11-27 04:44:58.708363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:02.153 [2024-11-27 04:44:58.708370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:02.153 [2024-11-27 04:44:58.708376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:02.153 [2024-11-27 04:44:58.708383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:02.153 [2024-11-27 04:44:58.708390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:02.153 [2024-11-27 04:44:58.708397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:02.153 [2024-11-27 04:44:58.708404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:02.153 [2024-11-27 04:44:58.708411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:02.153 [2024-11-27 04:44:58.708417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:02.153 [2024-11-27 04:44:58.708424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:02.153 [2024-11-27 04:44:58.708431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:02.153 [2024-11-27 04:44:58.708438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:02.153 [2024-11-27 04:44:58.708445] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:02.153 [2024-11-27 04:44:58.708453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:02.153 [2024-11-27 04:44:58.708463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:02.153 [2024-11-27 04:44:58.708470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:02.153 [2024-11-27 04:44:58.708477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:02.153 [2024-11-27 04:44:58.708484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:02.153 [2024-11-27 04:44:58.708491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.153 [2024-11-27 04:44:58.708498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:02.153 [2024-11-27 04:44:58.708505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.592 ms 00:28:02.153 [2024-11-27 04:44:58.708512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.153 [2024-11-27 04:44:58.732446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.153 [2024-11-27 04:44:58.732479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:02.153 [2024-11-27 04:44:58.732489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.887 ms 00:28:02.153 [2024-11-27 04:44:58.732496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.153 [2024-11-27 04:44:58.732530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.153 [2024-11-27 04:44:58.732538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:02.153 [2024-11-27 04:44:58.732546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:02.153 [2024-11-27 04:44:58.732553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.412 [2024-11-27 04:44:58.762750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.412 [2024-11-27 04:44:58.762784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:02.412 [2024-11-27 04:44:58.762794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.148 ms 00:28:02.412 [2024-11-27 04:44:58.762802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.412 [2024-11-27 04:44:58.762828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.412 [2024-11-27 04:44:58.762836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:02.412 [2024-11-27 04:44:58.762844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:02.412 [2024-11-27 04:44:58.762853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.412 [2024-11-27 04:44:58.762938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.412 [2024-11-27 04:44:58.762948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:28:02.412 [2024-11-27 04:44:58.762956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:02.412 [2024-11-27 04:44:58.762963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.412 [2024-11-27 04:44:58.762999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.412 [2024-11-27 04:44:58.763008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:02.412 [2024-11-27 04:44:58.763015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:02.412 [2024-11-27 04:44:58.763022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.412 [2024-11-27 04:44:58.777006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.412 [2024-11-27 04:44:58.777036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:02.412 [2024-11-27 04:44:58.777046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.959 ms 00:28:02.412 [2024-11-27 04:44:58.777056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.412 [2024-11-27 04:44:58.777156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.412 [2024-11-27 04:44:58.777167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:02.412 [2024-11-27 04:44:58.777175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:02.412 [2024-11-27 04:44:58.777183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.412 [2024-11-27 04:44:58.811094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.412 [2024-11-27 04:44:58.811133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:02.412 [2024-11-27 04:44:58.811146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.893 ms 00:28:02.412 [2024-11-27 04:44:58.811154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.412 [2024-11-27 04:44:58.821110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.412 [2024-11-27 04:44:58.821165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:02.412 [2024-11-27 04:44:58.821180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.839 ms 00:28:02.412 [2024-11-27 04:44:58.821193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.412 [2024-11-27 04:44:58.875955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.412 [2024-11-27 04:44:58.876015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:02.412 [2024-11-27 04:44:58.876029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.691 ms 00:28:02.412 [2024-11-27 04:44:58.876037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.412 [2024-11-27 04:44:58.876166] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:02.413 [2024-11-27 04:44:58.876261] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:02.413 [2024-11-27 04:44:58.876348] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:02.413 [2024-11-27 04:44:58.876441] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:02.413 [2024-11-27 04:44:58.876456] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.413 [2024-11-27 04:44:58.876464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:02.413 [2024-11-27 04:44:58.876473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.373 ms 00:28:02.413 [2024-11-27 04:44:58.876480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.413 [2024-11-27 04:44:58.876544] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:02.413 [2024-11-27 04:44:58.876556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.413 [2024-11-27 04:44:58.876567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:02.413 [2024-11-27 04:44:58.876575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:02.413 [2024-11-27 04:44:58.876582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.413 [2024-11-27 04:44:58.890509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.413 [2024-11-27 04:44:58.890547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:02.413 [2024-11-27 04:44:58.890557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.905 ms 00:28:02.413 [2024-11-27 04:44:58.890565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.413 [2024-11-27 04:44:58.898850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.413 [2024-11-27 04:44:58.898881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:02.413 [2024-11-27 04:44:58.898890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:02.413 [2024-11-27 04:44:58.898897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.413 [2024-11-27 04:44:58.898977] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:02.413 [2024-11-27 04:44:58.899103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.413 [2024-11-27 04:44:58.899114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:02.413 [2024-11-27 04:44:58.899122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.127 ms 00:28:02.413 [2024-11-27 04:44:58.899129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.978 [2024-11-27 04:44:59.337710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.978 [2024-11-27 04:44:59.337786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:02.978 [2024-11-27 04:44:59.337801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 437.833 ms 00:28:02.978 [2024-11-27 04:44:59.337809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.978 [2024-11-27 04:44:59.341515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.978 [2024-11-27 04:44:59.341552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:02.978 [2024-11-27 04:44:59.341563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.784 ms 00:28:02.978 [2024-11-27 04:44:59.341575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.978 [2024-11-27 04:44:59.341925] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:28:02.978 [2024-11-27 04:44:59.341959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.978 [2024-11-27 04:44:59.341967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:02.978 [2024-11-27 04:44:59.341976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.357 ms 00:28:02.978 [2024-11-27 04:44:59.341984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.978 [2024-11-27 04:44:59.342045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.978 [2024-11-27 04:44:59.342056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:02.978 [2024-11-27 04:44:59.342064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:02.978 [2024-11-27 04:44:59.342075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.978 [2024-11-27 04:44:59.342108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 443.132 ms, result 0 00:28:02.978 [2024-11-27 04:44:59.342149] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:02.978 [2024-11-27 04:44:59.342235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.978 [2024-11-27 04:44:59.342245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:02.978 [2024-11-27 04:44:59.342253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.087 ms 00:28:02.978 [2024-11-27 04:44:59.342260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.236 [2024-11-27 04:44:59.792952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.236 [2024-11-27 04:44:59.793024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:03.236 [2024-11-27 04:44:59.793049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 449.788 ms 00:28:03.236 [2024-11-27 04:44:59.793058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.236 [2024-11-27 04:44:59.797499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.236 [2024-11-27 04:44:59.797537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:03.236 [2024-11-27 04:44:59.797548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.073 ms 00:28:03.236 [2024-11-27 04:44:59.797555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.236 [2024-11-27 04:44:59.797888] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:03.236 [2024-11-27 04:44:59.797920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.236 [2024-11-27 04:44:59.797929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:03.236 [2024-11-27 04:44:59.797938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.339 ms 00:28:03.236 [2024-11-27 04:44:59.797946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.236 [2024-11-27 04:44:59.797988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.236 [2024-11-27 04:44:59.797998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:03.236 [2024-11-27 04:44:59.798006] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:03.236 [2024-11-27 04:44:59.798014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.236 [2024-11-27 04:44:59.798058] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 455.901 ms, result 0 00:28:03.236 [2024-11-27 04:44:59.798102] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:03.236 [2024-11-27 04:44:59.798113] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:03.236 [2024-11-27 04:44:59.798122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.236 [2024-11-27 04:44:59.798130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:03.236 [2024-11-27 04:44:59.798137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 899.163 ms 00:28:03.236 [2024-11-27 04:44:59.798145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.236 [2024-11-27 04:44:59.798174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.236 [2024-11-27 04:44:59.798185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:03.236 [2024-11-27 04:44:59.798193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:03.236 [2024-11-27 04:44:59.798200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.236 [2024-11-27 04:44:59.809091] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:03.236 [2024-11-27 04:44:59.809196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.236 [2024-11-27 04:44:59.809206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:03.236 [2024-11-27 04:44:59.809215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.980 ms 00:28:03.236 [2024-11-27 04:44:59.809223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.236 [2024-11-27 04:44:59.809891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.236 [2024-11-27 04:44:59.809916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:03.236 [2024-11-27 04:44:59.809925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.602 ms 00:28:03.236 [2024-11-27 04:44:59.809932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.236 [2024-11-27 04:44:59.812169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.236 [2024-11-27 04:44:59.812190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:03.236 [2024-11-27 04:44:59.812200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.221 ms 00:28:03.236 [2024-11-27 04:44:59.812208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.236 [2024-11-27 04:44:59.812245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.236 [2024-11-27 04:44:59.812254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:03.236 [2024-11-27 04:44:59.812265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:03.236 [2024-11-27 04:44:59.812272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.236 [2024-11-27 04:44:59.812374] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.236 [2024-11-27 04:44:59.812389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:03.236 [2024-11-27 04:44:59.812397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:03.236 [2024-11-27 04:44:59.812405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.236 [2024-11-27 04:44:59.812425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.236 [2024-11-27 04:44:59.812433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:03.236 [2024-11-27 04:44:59.812440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:03.236 [2024-11-27 04:44:59.812448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.236 [2024-11-27 04:44:59.812475] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:03.236 [2024-11-27 04:44:59.812489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.236 [2024-11-27 04:44:59.812496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:03.236 [2024-11-27 04:44:59.812503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:03.236 [2024-11-27 04:44:59.812510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.236 [2024-11-27 04:44:59.812560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.236 [2024-11-27 04:44:59.812574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:03.236 [2024-11-27 04:44:59.812582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:03.236 [2024-11-27 04:44:59.812589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.236 [2024-11-27 04:44:59.813514] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1134.930 ms, result 0 00:28:03.493 [2024-11-27 04:44:59.825867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.493 [2024-11-27 04:44:59.841853] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:03.493 [2024-11-27 04:44:59.850021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:03.751 04:45:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:03.751 04:45:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:03.751 04:45:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:03.751 04:45:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:03.751 04:45:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:03.751 04:45:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:03.751 04:45:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:03.751 04:45:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:03.751 Validate MD5 checksum, iteration 1 00:28:03.751 04:45:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:03.751 04:45:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:03.751 04:45:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:03.751 04:45:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:03.751 04:45:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:03.751 04:45:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:03.751 04:45:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:03.751 [2024-11-27 04:45:00.297011] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:28:03.751 [2024-11-27 04:45:00.297130] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80723 ] 00:28:04.009 [2024-11-27 04:45:00.453599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.009 [2024-11-27 04:45:00.552372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.906  [2024-11-27T04:45:02.751Z] Copying: 673/1024 [MB] (673 MBps) [2024-11-27T04:45:04.124Z] Copying: 1024/1024 [MB] (average 645 MBps) 00:28:07.537 00:28:07.537 04:45:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:07.537 04:45:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:10.099 04:45:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:10.099 04:45:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=44e60ff2755ee7aa951131185392f4f4 00:28:10.099 04:45:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 44e60ff2755ee7aa951131185392f4f4 != \4\4\e\6\0\f\f\2\7\5\5\e\e\7\a\a\9\5\1\1\3\1\1\8\5\3\9\2\f\4\f\4 ]] 00:28:10.099 04:45:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:10.099 04:45:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:10.099 04:45:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:10.099 Validate MD5 checksum, iteration 2 00:28:10.099 04:45:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:10.099 04:45:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:10.099 04:45:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:10.099 04:45:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:10.099 04:45:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:10.099 04:45:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:10.099 [2024-11-27 04:45:06.293150] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:28:10.099 [2024-11-27 04:45:06.293268] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80791 ] 00:28:10.099 [2024-11-27 04:45:06.449541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.099 [2024-11-27 04:45:06.550941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.993  [2024-11-27T04:45:08.580Z] Copying: 680/1024 [MB] (680 MBps) [2024-11-27T04:45:09.515Z] Copying: 1024/1024 [MB] (average 679 MBps) 00:28:12.928 00:28:12.928 04:45:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:12.928 04:45:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:14.830 04:45:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:14.830 04:45:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=431355f9c130cd249fae20154bcb4684 00:28:14.830 04:45:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 431355f9c130cd249fae20154bcb4684 != \4\3\1\3\5\5\f\9\c\1\3\0\c\d\2\4\9\f\a\e\2\0\1\5\4\b\c\b\4\6\8\4 ]] 00:28:14.830 04:45:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:14.831 04:45:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:14.831 04:45:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:14.831 04:45:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:14.831 04:45:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:14.831 04:45:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80684 ]] 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80684 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80684 ']' 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80684 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80684 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:28:15.088 killing process with pid 80684 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80684' 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80684 00:28:15.088 04:45:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80684 00:28:15.656 [2024-11-27 04:45:12.020553] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:15.656 [2024-11-27 04:45:12.033026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.656 [2024-11-27 04:45:12.033061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:15.656 [2024-11-27 04:45:12.033072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:15.656 [2024-11-27 04:45:12.033079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.656 [2024-11-27 04:45:12.033097] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:15.656 [2024-11-27 04:45:12.035235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.656 [2024-11-27 04:45:12.035259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:15.656 [2024-11-27 04:45:12.035271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.127 ms 00:28:15.656 [2024-11-27 04:45:12.035278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.656 [2024-11-27 04:45:12.035447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.656 [2024-11-27 04:45:12.035461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:15.656 [2024-11-27 04:45:12.035468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.153 ms 00:28:15.656 [2024-11-27 04:45:12.035474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.656 [2024-11-27 04:45:12.036384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.656 [2024-11-27 04:45:12.036403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:15.656 [2024-11-27 04:45:12.036410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.899 ms 00:28:15.656 [2024-11-27 04:45:12.036420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.656 [2024-11-27 04:45:12.037368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.656 [2024-11-27 04:45:12.037384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:15.656 [2024-11-27 04:45:12.037391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.925 ms 00:28:15.656 [2024-11-27 04:45:12.037397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.656 [2024-11-27 04:45:12.044880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.656 [2024-11-27 04:45:12.044905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:15.656 [2024-11-27 04:45:12.044917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.448 ms 00:28:15.656 [2024-11-27 04:45:12.044924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.656 [2024-11-27 04:45:12.049360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.656 [2024-11-27 04:45:12.049385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Persist valid map metadata 00:28:15.656 [2024-11-27 04:45:12.049393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.409 ms 00:28:15.656 [2024-11-27 04:45:12.049400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.656 [2024-11-27 04:45:12.049459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.656 [2024-11-27 04:45:12.049466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:15.656 [2024-11-27 04:45:12.049474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:15.656 [2024-11-27 04:45:12.049484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.656 [2024-11-27 04:45:12.056648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.656 [2024-11-27 04:45:12.056672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:15.656 [2024-11-27 04:45:12.056679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.151 ms 00:28:15.656 [2024-11-27 04:45:12.056685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.656 [2024-11-27 04:45:12.063860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.656 [2024-11-27 04:45:12.063883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:15.656 [2024-11-27 04:45:12.063891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.151 ms 00:28:15.656 [2024-11-27 04:45:12.063897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.656 [2024-11-27 04:45:12.071138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.656 [2024-11-27 04:45:12.071161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:15.656 [2024-11-27 04:45:12.071169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.215 ms 00:28:15.656 [2024-11-27 04:45:12.071175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.656 [2024-11-27 04:45:12.078082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.656 [2024-11-27 04:45:12.078105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:15.656 [2024-11-27 04:45:12.078113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.862 ms 00:28:15.656 [2024-11-27 04:45:12.078118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.656 [2024-11-27 04:45:12.078143] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:15.656 [2024-11-27 04:45:12.078154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:15.656 [2024-11-27 04:45:12.078162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:15.656 [2024-11-27 04:45:12.078170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:15.656 [2024-11-27 04:45:12.078176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:15.656 [2024-11-27 04:45:12.078183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:15.656 [2024-11-27 04:45:12.078188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:15.656 [2024-11-27 04:45:12.078194] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:15.656 [2024-11-27 04:45:12.078200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:15.656 [2024-11-27 04:45:12.078206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:15.656 [2024-11-27 04:45:12.078212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:15.656 [2024-11-27 04:45:12.078218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:15.656 [2024-11-27 04:45:12.078224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:15.656 [2024-11-27 04:45:12.078230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:15.656 [2024-11-27 04:45:12.078236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:15.656 [2024-11-27 04:45:12.078243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:15.656 [2024-11-27 04:45:12.078254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:15.656 [2024-11-27 04:45:12.078260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:15.656 [2024-11-27 04:45:12.078266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:15.656 [2024-11-27 04:45:12.078273] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:15.656 [2024-11-27 04:45:12.078279] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e710666e-e67e-47bb-9c51-2525cd36c221 00:28:15.656 [2024-11-27 04:45:12.078285] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:15.656 [2024-11-27 04:45:12.078291] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:15.656 [2024-11-27 04:45:12.078297] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:15.656 [2024-11-27 04:45:12.078303] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:15.656 [2024-11-27 04:45:12.078309] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:15.656 [2024-11-27 04:45:12.078315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:15.656 [2024-11-27 04:45:12.078324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:15.656 [2024-11-27 04:45:12.078329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:15.656 [2024-11-27 04:45:12.078335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:15.656 [2024-11-27 04:45:12.078340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.656 [2024-11-27 04:45:12.078345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:15.657 [2024-11-27 04:45:12.078352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.197 ms 00:28:15.657 [2024-11-27 04:45:12.078358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.657 [2024-11-27 04:45:12.088309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.657 [2024-11-27 04:45:12.088331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 
00:28:15.657 [2024-11-27 04:45:12.088340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.938 ms 00:28:15.657 [2024-11-27 04:45:12.088346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.657 [2024-11-27 04:45:12.088628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.657 [2024-11-27 04:45:12.088641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:15.657 [2024-11-27 04:45:12.088648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:28:15.657 [2024-11-27 04:45:12.088654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.657 [2024-11-27 04:45:12.122397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.657 [2024-11-27 04:45:12.122425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:15.657 [2024-11-27 04:45:12.122433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.657 [2024-11-27 04:45:12.122443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.657 [2024-11-27 04:45:12.122475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.657 [2024-11-27 04:45:12.122482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:15.657 [2024-11-27 04:45:12.122491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.657 [2024-11-27 04:45:12.122498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.657 [2024-11-27 04:45:12.122572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.657 [2024-11-27 04:45:12.122580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:15.657 [2024-11-27 04:45:12.122587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.657 [2024-11-27 04:45:12.122593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.657 [2024-11-27 04:45:12.122609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.657 [2024-11-27 04:45:12.122615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:15.657 [2024-11-27 04:45:12.122622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.657 [2024-11-27 04:45:12.122628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.657 [2024-11-27 04:45:12.183603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.657 [2024-11-27 04:45:12.183636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:15.657 [2024-11-27 04:45:12.183645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.657 [2024-11-27 04:45:12.183651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.657 [2024-11-27 04:45:12.232885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.657 [2024-11-27 04:45:12.232923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:15.657 [2024-11-27 04:45:12.232933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.657 [2024-11-27 04:45:12.232940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.657 [2024-11-27 04:45:12.233008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.657 [2024-11-27 04:45:12.233016] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:15.657 [2024-11-27 04:45:12.233023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.657 [2024-11-27 04:45:12.233029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.657 [2024-11-27 04:45:12.233073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.657 [2024-11-27 04:45:12.233089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:15.657 [2024-11-27 04:45:12.233096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.657 [2024-11-27 04:45:12.233102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.657 [2024-11-27 04:45:12.233173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.657 [2024-11-27 04:45:12.233180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:15.657 [2024-11-27 04:45:12.233187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.657 [2024-11-27 04:45:12.233192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.657 [2024-11-27 04:45:12.233221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.657 [2024-11-27 04:45:12.233229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:15.657 [2024-11-27 04:45:12.233240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.657 [2024-11-27 04:45:12.233246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.657 [2024-11-27 04:45:12.233274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.657 [2024-11-27 04:45:12.233280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:15.657 [2024-11-27 04:45:12.233287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.657 [2024-11-27 04:45:12.233292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.657 [2024-11-27 04:45:12.233323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.657 [2024-11-27 04:45:12.233334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:15.657 [2024-11-27 04:45:12.233340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.657 [2024-11-27 04:45:12.233346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.657 [2024-11-27 04:45:12.233435] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 200.388 ms, result 0 00:28:17.031 04:45:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:17.031 04:45:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:17.031 04:45:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:17.031 04:45:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:17.031 04:45:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:17.031 04:45:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:17.031 04:45:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:17.031 Remove shared memory files 00:28:17.031 04:45:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory 
files 00:28:17.031 04:45:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:17.031 04:45:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:17.031 04:45:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80493 00:28:17.031 04:45:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:17.031 04:45:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:17.031 ************************************ 00:28:17.031 END TEST ftl_upgrade_shutdown 00:28:17.031 ************************************ 00:28:17.031 00:28:17.031 real 1m17.453s 00:28:17.031 user 1m48.042s 00:28:17.031 sys 0m17.350s 00:28:17.031 04:45:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:17.031 04:45:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:17.031 04:45:13 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:17.031 04:45:13 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:17.031 04:45:13 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:28:17.031 04:45:13 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:17.031 04:45:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:17.031 ************************************ 00:28:17.031 START TEST ftl_restore_fast 00:28:17.031 ************************************ 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:17.031 * Looking for test storage... 00:28:17.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:17.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.031 --rc genhtml_branch_coverage=1 00:28:17.031 --rc genhtml_function_coverage=1 00:28:17.031 --rc genhtml_legend=1 00:28:17.031 --rc geninfo_all_blocks=1 00:28:17.031 --rc geninfo_unexecuted_blocks=1 00:28:17.031 00:28:17.031 ' 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:17.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.031 --rc genhtml_branch_coverage=1 00:28:17.031 --rc genhtml_function_coverage=1 00:28:17.031 --rc genhtml_legend=1 00:28:17.031 --rc geninfo_all_blocks=1 00:28:17.031 --rc geninfo_unexecuted_blocks=1 00:28:17.031 00:28:17.031 ' 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:17.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.031 --rc genhtml_branch_coverage=1 00:28:17.031 --rc genhtml_function_coverage=1 00:28:17.031 --rc genhtml_legend=1 00:28:17.031 --rc geninfo_all_blocks=1 00:28:17.031 --rc geninfo_unexecuted_blocks=1 00:28:17.031 00:28:17.031 ' 00:28:17.031 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:17.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.032 --rc genhtml_branch_coverage=1 00:28:17.032 --rc genhtml_function_coverage=1 00:28:17.032 --rc genhtml_legend=1 00:28:17.032 --rc geninfo_all_blocks=1 00:28:17.032 --rc geninfo_unexecuted_blocks=1 00:28:17.032 00:28:17.032 ' 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.HZw9JYjhTV 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:17.032 04:45:13 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=80942 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 80942 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 80942 ']' 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:17.032 04:45:13 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.032 [2024-11-27 04:45:13.455813] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:28:17.032 [2024-11-27 04:45:13.455911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80942 ] 00:28:17.032 [2024-11-27 04:45:13.613473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.341 [2024-11-27 04:45:13.761387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.907 04:45:14 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:17.907 04:45:14 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:28:17.907 04:45:14 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:17.907 04:45:14 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:17.907 04:45:14 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:17.907 04:45:14 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:17.907 04:45:14 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:17.907 04:45:14 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:18.165 04:45:14 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:18.165 04:45:14 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:18.165 04:45:14 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:18.165 04:45:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:18.165 04:45:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:18.165 04:45:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:18.165 04:45:14 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:28:18.165 04:45:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:18.423 04:45:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:18.423 { 00:28:18.423 "name": "nvme0n1", 00:28:18.423 "aliases": [ 00:28:18.423 "eb270579-4313-4592-969c-3647ec8490c1" 00:28:18.423 ], 00:28:18.423 "product_name": "NVMe disk", 00:28:18.423 "block_size": 4096, 00:28:18.423 "num_blocks": 1310720, 00:28:18.423 "uuid": "eb270579-4313-4592-969c-3647ec8490c1", 00:28:18.423 "numa_id": -1, 00:28:18.423 "assigned_rate_limits": { 00:28:18.423 "rw_ios_per_sec": 0, 00:28:18.423 "rw_mbytes_per_sec": 0, 00:28:18.423 "r_mbytes_per_sec": 0, 00:28:18.423 "w_mbytes_per_sec": 0 00:28:18.423 }, 00:28:18.423 "claimed": true, 00:28:18.423 "claim_type": "read_many_write_one", 00:28:18.423 "zoned": false, 00:28:18.423 "supported_io_types": { 00:28:18.423 "read": true, 00:28:18.423 "write": true, 00:28:18.423 "unmap": true, 00:28:18.423 "flush": true, 00:28:18.423 "reset": true, 00:28:18.423 "nvme_admin": true, 00:28:18.423 "nvme_io": true, 00:28:18.423 "nvme_io_md": false, 00:28:18.423 "write_zeroes": true, 00:28:18.423 "zcopy": false, 00:28:18.423 "get_zone_info": false, 00:28:18.423 "zone_management": false, 00:28:18.423 "zone_append": false, 00:28:18.423 "compare": true, 00:28:18.423 "compare_and_write": false, 00:28:18.423 "abort": true, 00:28:18.423 "seek_hole": false, 00:28:18.423 "seek_data": false, 00:28:18.423 "copy": true, 00:28:18.423 "nvme_iov_md": false 00:28:18.423 }, 00:28:18.423 "driver_specific": { 00:28:18.423 "nvme": [ 00:28:18.423 { 00:28:18.423 "pci_address": "0000:00:11.0", 00:28:18.423 "trid": { 00:28:18.423 "trtype": "PCIe", 00:28:18.423 "traddr": "0000:00:11.0" 00:28:18.423 }, 00:28:18.423 "ctrlr_data": { 00:28:18.423 "cntlid": 0, 00:28:18.423 "vendor_id": "0x1b36", 00:28:18.423 "model_number": "QEMU NVMe Ctrl", 00:28:18.423 "serial_number": "12341", 00:28:18.423 "firmware_revision": "8.0.0", 00:28:18.423 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:18.423 "oacs": { 00:28:18.423 "security": 0, 00:28:18.423 "format": 1, 00:28:18.423 "firmware": 0, 00:28:18.423 "ns_manage": 1 00:28:18.423 }, 00:28:18.423 "multi_ctrlr": false, 00:28:18.423 "ana_reporting": false 00:28:18.423 }, 00:28:18.423 "vs": { 00:28:18.423 "nvme_version": "1.4" 00:28:18.423 }, 00:28:18.423 "ns_data": { 00:28:18.423 "id": 1, 00:28:18.423 "can_share": false 00:28:18.423 } 00:28:18.423 } 00:28:18.423 ], 00:28:18.423 "mp_policy": "active_passive" 00:28:18.423 } 00:28:18.423 } 00:28:18.423 ]' 00:28:18.423 04:45:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:18.423 04:45:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:18.423 04:45:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:18.423 04:45:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:18.423 04:45:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:18.423 04:45:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:28:18.423 04:45:14 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:18.423 04:45:14 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:18.423 04:45:14 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:18.423 04:45:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:18.423 04:45:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:18.681 04:45:15 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=c1fb411d-2f61-4a10-aaf9-bf82de356247 00:28:18.681 04:45:15 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:18.681 04:45:15 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c1fb411d-2f61-4a10-aaf9-bf82de356247 00:28:18.940 04:45:15 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:19.198 04:45:15 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=0700349e-1560-4654-a93b-b0467ee18fca 00:28:19.198 04:45:15 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0700349e-1560-4654-a93b-b0467ee18fca 00:28:19.198 04:45:15 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=5c2fa462-cdbc-410a-861d-693532938804 00:28:19.198 04:45:15 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:19.198 04:45:15 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5c2fa462-cdbc-410a-861d-693532938804 00:28:19.198 04:45:15 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:19.198 04:45:15 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:19.198 04:45:15 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=5c2fa462-cdbc-410a-861d-693532938804 00:28:19.198 04:45:15 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:19.198 04:45:15 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 5c2fa462-cdbc-410a-861d-693532938804 00:28:19.198 04:45:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=5c2fa462-cdbc-410a-861d-693532938804 00:28:19.198 04:45:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:19.198 04:45:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:19.198 04:45:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:19.198 04:45:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5c2fa462-cdbc-410a-861d-693532938804 00:28:19.457 04:45:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:19.457 { 00:28:19.457 "name": "5c2fa462-cdbc-410a-861d-693532938804", 00:28:19.457 "aliases": [ 00:28:19.457 "lvs/nvme0n1p0" 00:28:19.457 ], 00:28:19.457 "product_name": "Logical Volume", 00:28:19.457 "block_size": 4096, 00:28:19.457 "num_blocks": 26476544, 00:28:19.457 "uuid": "5c2fa462-cdbc-410a-861d-693532938804", 00:28:19.457 "assigned_rate_limits": { 00:28:19.457 "rw_ios_per_sec": 0, 00:28:19.457 "rw_mbytes_per_sec": 0, 00:28:19.457 "r_mbytes_per_sec": 0, 00:28:19.457 "w_mbytes_per_sec": 0 00:28:19.457 }, 00:28:19.457 "claimed": false, 00:28:19.457 "zoned": false, 00:28:19.457 "supported_io_types": { 00:28:19.457 "read": true, 00:28:19.457 "write": true, 00:28:19.457 "unmap": true, 00:28:19.457 "flush": false, 00:28:19.457 "reset": true, 00:28:19.457 "nvme_admin": false, 00:28:19.457 "nvme_io": false, 00:28:19.457 "nvme_io_md": false, 00:28:19.457 "write_zeroes": true, 00:28:19.457 "zcopy": false, 00:28:19.457 "get_zone_info": false, 00:28:19.457 "zone_management": false, 00:28:19.457 
"zone_append": false, 00:28:19.457 "compare": false, 00:28:19.457 "compare_and_write": false, 00:28:19.457 "abort": false, 00:28:19.457 "seek_hole": true, 00:28:19.457 "seek_data": true, 00:28:19.457 "copy": false, 00:28:19.457 "nvme_iov_md": false 00:28:19.457 }, 00:28:19.457 "driver_specific": { 00:28:19.457 "lvol": { 00:28:19.457 "lvol_store_uuid": "0700349e-1560-4654-a93b-b0467ee18fca", 00:28:19.457 "base_bdev": "nvme0n1", 00:28:19.457 "thin_provision": true, 00:28:19.457 "num_allocated_clusters": 0, 00:28:19.457 "snapshot": false, 00:28:19.457 "clone": false, 00:28:19.457 "esnap_clone": false 00:28:19.457 } 00:28:19.457 } 00:28:19.457 } 00:28:19.457 ]' 00:28:19.457 04:45:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:19.457 04:45:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:19.457 04:45:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:19.457 04:45:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:19.457 04:45:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:19.457 04:45:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:19.457 04:45:15 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:19.457 04:45:15 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:19.457 04:45:15 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:19.716 04:45:16 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:19.716 04:45:16 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:19.716 04:45:16 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 5c2fa462-cdbc-410a-861d-693532938804 00:28:19.716 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=5c2fa462-cdbc-410a-861d-693532938804 00:28:19.716 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:19.716 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:19.716 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:19.716 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5c2fa462-cdbc-410a-861d-693532938804 00:28:19.973 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:19.973 { 00:28:19.973 "name": "5c2fa462-cdbc-410a-861d-693532938804", 00:28:19.973 "aliases": [ 00:28:19.973 "lvs/nvme0n1p0" 00:28:19.973 ], 00:28:19.973 "product_name": "Logical Volume", 00:28:19.973 "block_size": 4096, 00:28:19.973 "num_blocks": 26476544, 00:28:19.973 "uuid": "5c2fa462-cdbc-410a-861d-693532938804", 00:28:19.973 "assigned_rate_limits": { 00:28:19.973 "rw_ios_per_sec": 0, 00:28:19.973 "rw_mbytes_per_sec": 0, 00:28:19.973 "r_mbytes_per_sec": 0, 00:28:19.973 "w_mbytes_per_sec": 0 00:28:19.973 }, 00:28:19.973 "claimed": false, 00:28:19.973 "zoned": false, 00:28:19.973 "supported_io_types": { 00:28:19.973 "read": true, 00:28:19.973 "write": true, 00:28:19.973 "unmap": true, 00:28:19.973 "flush": false, 00:28:19.973 "reset": true, 00:28:19.973 "nvme_admin": false, 00:28:19.973 "nvme_io": false, 00:28:19.973 "nvme_io_md": false, 00:28:19.973 "write_zeroes": true, 00:28:19.973 "zcopy": false, 00:28:19.973 "get_zone_info": false, 00:28:19.973 
"zone_management": false, 00:28:19.973 "zone_append": false, 00:28:19.973 "compare": false, 00:28:19.973 "compare_and_write": false, 00:28:19.973 "abort": false, 00:28:19.973 "seek_hole": true, 00:28:19.973 "seek_data": true, 00:28:19.973 "copy": false, 00:28:19.974 "nvme_iov_md": false 00:28:19.974 }, 00:28:19.974 "driver_specific": { 00:28:19.974 "lvol": { 00:28:19.974 "lvol_store_uuid": "0700349e-1560-4654-a93b-b0467ee18fca", 00:28:19.974 "base_bdev": "nvme0n1", 00:28:19.974 "thin_provision": true, 00:28:19.974 "num_allocated_clusters": 0, 00:28:19.974 "snapshot": false, 00:28:19.974 "clone": false, 00:28:19.974 "esnap_clone": false 00:28:19.974 } 00:28:19.974 } 00:28:19.974 } 00:28:19.974 ]' 00:28:19.974 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:19.974 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:19.974 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:19.974 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:19.974 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:19.974 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:19.974 04:45:16 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:19.974 04:45:16 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:20.231 04:45:16 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:20.231 04:45:16 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 5c2fa462-cdbc-410a-861d-693532938804 00:28:20.231 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=5c2fa462-cdbc-410a-861d-693532938804 00:28:20.231 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:20.231 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:20.231 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:20.231 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5c2fa462-cdbc-410a-861d-693532938804 00:28:20.489 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:20.489 { 00:28:20.489 "name": "5c2fa462-cdbc-410a-861d-693532938804", 00:28:20.489 "aliases": [ 00:28:20.489 "lvs/nvme0n1p0" 00:28:20.489 ], 00:28:20.489 "product_name": "Logical Volume", 00:28:20.489 "block_size": 4096, 00:28:20.489 "num_blocks": 26476544, 00:28:20.489 "uuid": "5c2fa462-cdbc-410a-861d-693532938804", 00:28:20.489 "assigned_rate_limits": { 00:28:20.489 "rw_ios_per_sec": 0, 00:28:20.489 "rw_mbytes_per_sec": 0, 00:28:20.489 "r_mbytes_per_sec": 0, 00:28:20.489 "w_mbytes_per_sec": 0 00:28:20.489 }, 00:28:20.489 "claimed": false, 00:28:20.489 "zoned": false, 00:28:20.489 "supported_io_types": { 00:28:20.489 "read": true, 00:28:20.489 "write": true, 00:28:20.489 "unmap": true, 00:28:20.489 "flush": false, 00:28:20.489 "reset": true, 00:28:20.489 "nvme_admin": false, 00:28:20.489 "nvme_io": false, 00:28:20.489 "nvme_io_md": false, 00:28:20.489 "write_zeroes": true, 00:28:20.489 "zcopy": false, 00:28:20.489 "get_zone_info": false, 00:28:20.489 "zone_management": false, 00:28:20.489 "zone_append": false, 00:28:20.489 "compare": false, 00:28:20.489 "compare_and_write": false, 00:28:20.489 "abort": false, 
00:28:20.489 "seek_hole": true, 00:28:20.489 "seek_data": true, 00:28:20.489 "copy": false, 00:28:20.489 "nvme_iov_md": false 00:28:20.489 }, 00:28:20.489 "driver_specific": { 00:28:20.489 "lvol": { 00:28:20.489 "lvol_store_uuid": "0700349e-1560-4654-a93b-b0467ee18fca", 00:28:20.489 "base_bdev": "nvme0n1", 00:28:20.489 "thin_provision": true, 00:28:20.489 "num_allocated_clusters": 0, 00:28:20.489 "snapshot": false, 00:28:20.489 "clone": false, 00:28:20.489 "esnap_clone": false 00:28:20.489 } 00:28:20.489 } 00:28:20.489 } 00:28:20.489 ]' 00:28:20.489 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:20.489 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:20.489 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:20.489 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:20.489 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:20.489 04:45:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:20.489 04:45:16 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:20.489 04:45:16 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5c2fa462-cdbc-410a-861d-693532938804 --l2p_dram_limit 10' 00:28:20.489 04:45:16 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:20.489 04:45:16 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:20.489 04:45:16 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:20.489 04:45:16 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:20.489 04:45:16 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:20.489 04:45:16 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5c2fa462-cdbc-410a-861d-693532938804 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:20.748 [2024-11-27 04:45:17.181347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.748 [2024-11-27 04:45:17.181488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:20.748 [2024-11-27 04:45:17.181508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:20.748 [2024-11-27 04:45:17.181516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.748 [2024-11-27 04:45:17.181579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.748 [2024-11-27 04:45:17.181588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:20.748 [2024-11-27 04:45:17.181597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:20.748 [2024-11-27 04:45:17.181603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.748 [2024-11-27 04:45:17.181622] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:20.748 [2024-11-27 04:45:17.182196] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:20.748 [2024-11-27 04:45:17.182213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.748 [2024-11-27 04:45:17.182219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:20.748 [2024-11-27 04:45:17.182227] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:28:20.748 [2024-11-27 04:45:17.182233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.748 [2024-11-27 04:45:17.182289] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 77665fbb-ce7f-48bd-bba3-969864502bb5 00:28:20.748 [2024-11-27 04:45:17.183294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.748 [2024-11-27 04:45:17.183325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:20.748 [2024-11-27 04:45:17.183333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:20.748 [2024-11-27 04:45:17.183341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.748 [2024-11-27 04:45:17.188645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.748 [2024-11-27 04:45:17.188773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:20.748 [2024-11-27 04:45:17.188786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.273 ms 00:28:20.748 [2024-11-27 04:45:17.188793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.748 [2024-11-27 04:45:17.188864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.748 [2024-11-27 04:45:17.188873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:20.748 [2024-11-27 04:45:17.188880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:20.748 [2024-11-27 04:45:17.188891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.748 [2024-11-27 04:45:17.188928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.748 [2024-11-27 04:45:17.188938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:20.748 [2024-11-27 04:45:17.188947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:20.748 [2024-11-27 04:45:17.188954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.748 [2024-11-27 04:45:17.188972] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:20.748 [2024-11-27 04:45:17.191996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.748 [2024-11-27 04:45:17.192095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:20.748 [2024-11-27 04:45:17.192111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.025 ms 00:28:20.748 [2024-11-27 04:45:17.192117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.748 [2024-11-27 04:45:17.192148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.748 [2024-11-27 04:45:17.192155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:20.748 [2024-11-27 04:45:17.192162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:20.748 [2024-11-27 04:45:17.192169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.748 [2024-11-27 04:45:17.192184] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:20.748 [2024-11-27 04:45:17.192294] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:20.748 [2024-11-27 04:45:17.192307] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:20.748 [2024-11-27 04:45:17.192316] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:20.748 [2024-11-27 04:45:17.192326] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:20.748 [2024-11-27 04:45:17.192333] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:20.748 [2024-11-27 04:45:17.192341] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:20.748 [2024-11-27 04:45:17.192349] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:20.748 [2024-11-27 04:45:17.192356] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:20.748 [2024-11-27 04:45:17.192361] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:20.748 [2024-11-27 04:45:17.192369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.748 [2024-11-27 04:45:17.192380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:20.748 [2024-11-27 04:45:17.192388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:28:20.748 [2024-11-27 04:45:17.192394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.748 [2024-11-27 04:45:17.192463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.748 [2024-11-27 04:45:17.192469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:20.748 [2024-11-27 04:45:17.192477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:20.748 [2024-11-27 04:45:17.192482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.748 [2024-11-27 04:45:17.192572] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:20.748 [2024-11-27 04:45:17.192580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:20.748 [2024-11-27 04:45:17.192588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:20.748 [2024-11-27 04:45:17.192594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.748 [2024-11-27 04:45:17.192603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:20.748 [2024-11-27 04:45:17.192608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:20.748 [2024-11-27 04:45:17.192615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:20.748 [2024-11-27 04:45:17.192620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:20.748 [2024-11-27 04:45:17.192627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:20.748 [2024-11-27 04:45:17.192632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:20.748 [2024-11-27 04:45:17.192639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:20.748 [2024-11-27 04:45:17.192644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:20.748 [2024-11-27 04:45:17.192651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:20.748 [2024-11-27 04:45:17.192657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:20.748 [2024-11-27 04:45:17.192664] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:20.748 [2024-11-27 04:45:17.192669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.748 [2024-11-27 04:45:17.192678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:20.748 [2024-11-27 04:45:17.192684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:20.748 [2024-11-27 04:45:17.192692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.748 [2024-11-27 04:45:17.192698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:20.748 [2024-11-27 04:45:17.192704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:20.748 [2024-11-27 04:45:17.192709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.748 [2024-11-27 04:45:17.192716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:20.749 [2024-11-27 04:45:17.192732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:20.749 [2024-11-27 04:45:17.192739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.749 [2024-11-27 04:45:17.192744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:20.749 [2024-11-27 04:45:17.192751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:20.749 [2024-11-27 04:45:17.192756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.749 [2024-11-27 04:45:17.192763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:20.749 [2024-11-27 04:45:17.192768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:20.749 [2024-11-27 04:45:17.192775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.749 [2024-11-27 04:45:17.192780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:20.749 [2024-11-27 04:45:17.192789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:20.749 [2024-11-27 04:45:17.192794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:20.749 [2024-11-27 04:45:17.192800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:20.749 [2024-11-27 04:45:17.192806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:20.749 [2024-11-27 04:45:17.192812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:20.749 [2024-11-27 04:45:17.192817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:20.749 [2024-11-27 04:45:17.192824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:20.749 [2024-11-27 04:45:17.192830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.749 [2024-11-27 04:45:17.192836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:20.749 [2024-11-27 04:45:17.192841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:20.749 [2024-11-27 04:45:17.192848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.749 [2024-11-27 04:45:17.192853] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:20.749 [2024-11-27 04:45:17.192861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:20.749 [2024-11-27 04:45:17.192867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:20.749 [2024-11-27 04:45:17.192874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.749 [2024-11-27 04:45:17.192880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:20.749 [2024-11-27 04:45:17.192889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:20.749 [2024-11-27 04:45:17.192895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:20.749 [2024-11-27 04:45:17.192903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:20.749 [2024-11-27 04:45:17.192908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:20.749 [2024-11-27 04:45:17.192915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:20.749 [2024-11-27 04:45:17.192923] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:20.749 [2024-11-27 04:45:17.192933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:20.749 [2024-11-27 04:45:17.192940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:20.749 [2024-11-27 04:45:17.192948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:20.749 [2024-11-27 04:45:17.192953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:20.749 [2024-11-27 04:45:17.192960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:20.749 [2024-11-27 04:45:17.192965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:20.749 [2024-11-27 04:45:17.192972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:20.749 [2024-11-27 04:45:17.192994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:20.749 [2024-11-27 04:45:17.193001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:20.749 [2024-11-27 04:45:17.193007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:20.749 [2024-11-27 04:45:17.193015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:20.749 [2024-11-27 04:45:17.193021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:20.749 [2024-11-27 04:45:17.193029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:20.749 [2024-11-27 04:45:17.193034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:20.749 [2024-11-27 04:45:17.193041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:28:20.749 [2024-11-27 04:45:17.193047] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:20.749 [2024-11-27 04:45:17.193055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:20.749 [2024-11-27 04:45:17.193062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:20.749 [2024-11-27 04:45:17.193069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:20.749 [2024-11-27 04:45:17.193074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:20.749 [2024-11-27 04:45:17.193082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:20.749 [2024-11-27 04:45:17.193088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.749 [2024-11-27 04:45:17.193096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:20.749 [2024-11-27 04:45:17.193102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:28:20.749 [2024-11-27 04:45:17.193109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.749 [2024-11-27 04:45:17.193154] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:20.749 [2024-11-27 04:45:17.193165] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:22.647 [2024-11-27 04:45:19.220223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.648 [2024-11-27 04:45:19.220277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:22.648 [2024-11-27 04:45:19.220292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2027.062 ms 00:28:22.648 [2024-11-27 04:45:19.220303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.905 [2024-11-27 04:45:19.245525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.905 [2024-11-27 04:45:19.245572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:22.905 [2024-11-27 04:45:19.245585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.021 ms 00:28:22.905 [2024-11-27 04:45:19.245594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.905 [2024-11-27 04:45:19.245740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.905 [2024-11-27 04:45:19.245754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:22.905 [2024-11-27 04:45:19.245764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:28:22.905 [2024-11-27 04:45:19.245777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.905 [2024-11-27 04:45:19.275904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.905 [2024-11-27 04:45:19.275947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:22.905 [2024-11-27 04:45:19.275958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.092 ms 00:28:22.905 [2024-11-27 04:45:19.275967] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.905 [2024-11-27 04:45:19.276003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.905 [2024-11-27 04:45:19.276012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:22.905 [2024-11-27 04:45:19.276021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:22.905 [2024-11-27 04:45:19.276035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.905 [2024-11-27 04:45:19.276375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.905 [2024-11-27 04:45:19.276392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:22.905 [2024-11-27 04:45:19.276401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:28:22.905 [2024-11-27 04:45:19.276410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.905 [2024-11-27 04:45:19.276511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.905 [2024-11-27 04:45:19.276524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:22.905 [2024-11-27 04:45:19.276531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:28:22.905 [2024-11-27 04:45:19.276542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.905 [2024-11-27 04:45:19.290508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.905 [2024-11-27 04:45:19.290541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:22.905 [2024-11-27 04:45:19.290550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.949 ms 00:28:22.905 [2024-11-27 04:45:19.290559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.905 [2024-11-27 04:45:19.313731] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:22.905 [2024-11-27 04:45:19.317180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.905 [2024-11-27 04:45:19.317224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:22.905 [2024-11-27 04:45:19.317243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.544 ms 00:28:22.905 [2024-11-27 04:45:19.317254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.905 [2024-11-27 04:45:19.375019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.905 [2024-11-27 04:45:19.375069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:22.905 [2024-11-27 04:45:19.375084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.715 ms 00:28:22.905 [2024-11-27 04:45:19.375093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.905 [2024-11-27 04:45:19.375272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.905 [2024-11-27 04:45:19.375283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:22.905 [2024-11-27 04:45:19.375295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:28:22.905 [2024-11-27 04:45:19.375303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.905 [2024-11-27 04:45:19.404579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.905 [2024-11-27 04:45:19.404620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:28:22.905 [2024-11-27 04:45:19.404635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.226 ms 00:28:22.905 [2024-11-27 04:45:19.404643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.905 [2024-11-27 04:45:19.426834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.905 [2024-11-27 04:45:19.426969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:22.905 [2024-11-27 04:45:19.426991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.144 ms 00:28:22.905 [2024-11-27 04:45:19.426999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.905 [2024-11-27 04:45:19.427561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.905 [2024-11-27 04:45:19.427579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:22.905 [2024-11-27 04:45:19.427592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:28:22.905 [2024-11-27 04:45:19.427600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.163 [2024-11-27 04:45:19.501194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.163 [2024-11-27 04:45:19.501236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:23.163 [2024-11-27 04:45:19.501255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.555 ms 00:28:23.163 [2024-11-27 04:45:19.501265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.163 [2024-11-27 04:45:19.524882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.163 [2024-11-27 04:45:19.524919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:23.163 [2024-11-27 04:45:19.524934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.542 ms 00:28:23.163 [2024-11-27 04:45:19.524943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.163 [2024-11-27 04:45:19.547974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.163 [2024-11-27 04:45:19.548006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:23.163 [2024-11-27 04:45:19.548019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.985 ms 00:28:23.163 [2024-11-27 04:45:19.548027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.163 [2024-11-27 04:45:19.571103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.163 [2024-11-27 04:45:19.571136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:23.163 [2024-11-27 04:45:19.571149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.038 ms 00:28:23.163 [2024-11-27 04:45:19.571157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.163 [2024-11-27 04:45:19.571197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.163 [2024-11-27 04:45:19.571206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:23.163 [2024-11-27 04:45:19.571218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:23.164 [2024-11-27 04:45:19.571225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.164 [2024-11-27 04:45:19.571299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.164 [2024-11-27 
04:45:19.571311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:23.164 [2024-11-27 04:45:19.571320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:23.164 [2024-11-27 04:45:19.571327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.164 [2024-11-27 04:45:19.572450] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2390.697 ms, result 0 00:28:23.164 { 00:28:23.164 "name": "ftl0", 00:28:23.164 "uuid": "77665fbb-ce7f-48bd-bba3-969864502bb5" 00:28:23.164 } 00:28:23.164 04:45:19 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:23.164 04:45:19 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:23.422 04:45:19 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:23.422 04:45:19 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:23.422 [2024-11-27 04:45:19.975817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.422 [2024-11-27 04:45:19.975873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:23.422 [2024-11-27 04:45:19.975886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:23.422 [2024-11-27 04:45:19.975895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.422 [2024-11-27 04:45:19.975919] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:23.422 [2024-11-27 04:45:19.978562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.422 [2024-11-27 04:45:19.978712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:23.422 [2024-11-27 04:45:19.978746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:28:23.422 [2024-11-27 04:45:19.978755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.423 [2024-11-27 04:45:19.979021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.423 [2024-11-27 04:45:19.979030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:23.423 [2024-11-27 04:45:19.979039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:28:23.423 [2024-11-27 04:45:19.979048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.423 [2024-11-27 04:45:19.982278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.423 [2024-11-27 04:45:19.982300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:23.423 [2024-11-27 04:45:19.982311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.214 ms 00:28:23.423 [2024-11-27 04:45:19.982319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.423 [2024-11-27 04:45:19.988516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.423 [2024-11-27 04:45:19.988546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:23.423 [2024-11-27 04:45:19.988558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.176 ms 00:28:23.423 [2024-11-27 04:45:19.988567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.683 [2024-11-27 04:45:20.012103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:23.683 [2024-11-27 04:45:20.012230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:23.683 [2024-11-27 04:45:20.012250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.467 ms 00:28:23.683 [2024-11-27 04:45:20.012258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.683 [2024-11-27 04:45:20.026819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.683 [2024-11-27 04:45:20.026936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:23.683 [2024-11-27 04:45:20.026956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.521 ms 00:28:23.683 [2024-11-27 04:45:20.026965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.683 [2024-11-27 04:45:20.027127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.683 [2024-11-27 04:45:20.027138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:23.683 [2024-11-27 04:45:20.027148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:28:23.683 [2024-11-27 04:45:20.027156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.683 [2024-11-27 04:45:20.049609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.683 [2024-11-27 04:45:20.049648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:23.683 [2024-11-27 04:45:20.049661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.431 ms 00:28:23.683 [2024-11-27 04:45:20.049668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.683 [2024-11-27 04:45:20.072391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.683 [2024-11-27 04:45:20.072427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:23.683 [2024-11-27 04:45:20.072440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.683 ms 00:28:23.683 [2024-11-27 04:45:20.072447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.683 [2024-11-27 04:45:20.095201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.683 [2024-11-27 04:45:20.095234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:23.683 [2024-11-27 04:45:20.095246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.711 ms 00:28:23.683 [2024-11-27 04:45:20.095254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.683 [2024-11-27 04:45:20.117600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.683 [2024-11-27 04:45:20.117714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:23.683 [2024-11-27 04:45:20.117743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.271 ms 00:28:23.683 [2024-11-27 04:45:20.117751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.683 [2024-11-27 04:45:20.117784] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:23.683 [2024-11-27 04:45:20.117798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117822] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.117991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.118001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.118009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.118017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.118025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.118035] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.118042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.118051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.118058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.118067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:23.683 [2024-11-27 04:45:20.118074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 
[2024-11-27 04:45:20.118236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:28:23.684 [2024-11-27 04:45:20.118447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:23.684 [2024-11-27 04:45:20.118641] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:23.684 [2024-11-27 04:45:20.118650] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 77665fbb-ce7f-48bd-bba3-969864502bb5 
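All 100 bands in the validity dump above report 0 / 261120 valid blocks with wr_cnt 0 and state free, which is exactly what a freshly created FTL instance should look like before any user writes: the NV cache was scrubbed during startup and no data has landed yet. A minimal sanity check over a saved copy of this log (ftl_restore_fast.log is a hypothetical path) tallies the band states and should report all bands under "free":

# Count how many bands the dump reports per state; grep -o emits one match
# per output line even though the log itself uses very long physical lines.
grep -oE 'state: [a-z]+' ftl_restore_fast.log | sort | uniq -c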
00:28:23.684 [2024-11-27 04:45:20.118658] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:23.684 [2024-11-27 04:45:20.118668] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:23.684 [2024-11-27 04:45:20.118677] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:23.684 [2024-11-27 04:45:20.118686] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:23.684 [2024-11-27 04:45:20.118693] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:23.684 [2024-11-27 04:45:20.118702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:23.684 [2024-11-27 04:45:20.118709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:23.684 [2024-11-27 04:45:20.118716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:23.684 [2024-11-27 04:45:20.118738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:23.684 [2024-11-27 04:45:20.118747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.684 [2024-11-27 04:45:20.118755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:23.684 [2024-11-27 04:45:20.118765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:28:23.684 [2024-11-27 04:45:20.118773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.684 [2024-11-27 04:45:20.131344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.684 [2024-11-27 04:45:20.131373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:23.684 [2024-11-27 04:45:20.131385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.536 ms 00:28:23.684 [2024-11-27 04:45:20.131393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.684 [2024-11-27 04:45:20.131743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.684 [2024-11-27 04:45:20.131767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:23.684 [2024-11-27 04:45:20.131780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:28:23.684 [2024-11-27 04:45:20.131787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.684 [2024-11-27 04:45:20.173616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.684 [2024-11-27 04:45:20.173648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:23.684 [2024-11-27 04:45:20.173660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.684 [2024-11-27 04:45:20.173668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.684 [2024-11-27 04:45:20.173740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.685 [2024-11-27 04:45:20.173750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:23.685 [2024-11-27 04:45:20.173761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.685 [2024-11-27 04:45:20.173768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.685 [2024-11-27 04:45:20.173836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.685 [2024-11-27 04:45:20.173846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:23.685 [2024-11-27 04:45:20.173855] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.685 [2024-11-27 04:45:20.173862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.685 [2024-11-27 04:45:20.173883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.685 [2024-11-27 04:45:20.173890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:23.685 [2024-11-27 04:45:20.173899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.685 [2024-11-27 04:45:20.173908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.685 [2024-11-27 04:45:20.251958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.685 [2024-11-27 04:45:20.251999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:23.685 [2024-11-27 04:45:20.252013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.685 [2024-11-27 04:45:20.252021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.944 [2024-11-27 04:45:20.315509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.944 [2024-11-27 04:45:20.315653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:23.944 [2024-11-27 04:45:20.315676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.944 [2024-11-27 04:45:20.315684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.944 [2024-11-27 04:45:20.315795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.944 [2024-11-27 04:45:20.315806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:23.944 [2024-11-27 04:45:20.315815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.944 [2024-11-27 04:45:20.315823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.944 [2024-11-27 04:45:20.315870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.944 [2024-11-27 04:45:20.315880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:23.944 [2024-11-27 04:45:20.315889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.944 [2024-11-27 04:45:20.315896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.944 [2024-11-27 04:45:20.315994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.944 [2024-11-27 04:45:20.316004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:23.944 [2024-11-27 04:45:20.316013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.944 [2024-11-27 04:45:20.316020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.944 [2024-11-27 04:45:20.316054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.944 [2024-11-27 04:45:20.316063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:23.944 [2024-11-27 04:45:20.316072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.944 [2024-11-27 04:45:20.316079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.944 [2024-11-27 04:45:20.316116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.944 [2024-11-27 04:45:20.316125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:28:23.944 [2024-11-27 04:45:20.316133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.944 [2024-11-27 04:45:20.316140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.944 [2024-11-27 04:45:20.316185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.944 [2024-11-27 04:45:20.316194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:23.944 [2024-11-27 04:45:20.316203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.944 [2024-11-27 04:45:20.316210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.944 [2024-11-27 04:45:20.316330] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.485 ms, result 0 00:28:23.944 true 00:28:23.944 04:45:20 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 80942 00:28:23.944 04:45:20 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 80942 ']' 00:28:23.944 04:45:20 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 80942 00:28:23.944 04:45:20 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:28:23.944 04:45:20 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.944 04:45:20 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80942 00:28:23.945 killing process with pid 80942 00:28:23.945 04:45:20 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:23.945 04:45:20 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:23.945 04:45:20 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80942' 00:28:23.945 04:45:20 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 80942 00:28:23.945 04:45:20 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 80942 00:28:30.497 04:45:26 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:33.024 262144+0 records in 00:28:33.024 262144+0 records out 00:28:33.024 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.43795 s, 312 MB/s 00:28:33.024 04:45:29 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:35.553 04:45:31 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:35.553 [2024-11-27 04:45:31.677856] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
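The dd figures above are self-consistent: 262144 records of 4 KiB is 1073741824 bytes, and 1073741824 B / 3.43795 s ≈ 312 MB/s, matching the rate dd reports. A one-line check of the same arithmetic:

# Recompute the dd throughput (decimal MB/s) from the byte count and elapsed
# time printed in the log above.
awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 3.43795 / 1e6 }'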
00:28:35.553 [2024-11-27 04:45:31.678061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81150 ] 00:28:35.553 [2024-11-27 04:45:31.849660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.553 [2024-11-27 04:45:31.997485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.836 [2024-11-27 04:45:32.252601] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:35.836 [2024-11-27 04:45:32.252666] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:35.836 [2024-11-27 04:45:32.405556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.836 [2024-11-27 04:45:32.405606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:35.836 [2024-11-27 04:45:32.405619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:35.836 [2024-11-27 04:45:32.405627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.836 [2024-11-27 04:45:32.405672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.836 [2024-11-27 04:45:32.405684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:35.836 [2024-11-27 04:45:32.405692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:35.836 [2024-11-27 04:45:32.405700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.836 [2024-11-27 04:45:32.405719] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:35.836 [2024-11-27 04:45:32.406386] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:35.836 [2024-11-27 04:45:32.406512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.836 [2024-11-27 04:45:32.406522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:35.836 [2024-11-27 04:45:32.406531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:28:35.836 [2024-11-27 04:45:32.406538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.836 [2024-11-27 04:45:32.407565] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:35.836 [2024-11-27 04:45:32.419795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.836 [2024-11-27 04:45:32.419827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:35.836 [2024-11-27 04:45:32.419839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.232 ms 00:28:35.836 [2024-11-27 04:45:32.419848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.836 [2024-11-27 04:45:32.419903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.836 [2024-11-27 04:45:32.419913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:35.836 [2024-11-27 04:45:32.419921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:35.836 [2024-11-27 04:45:32.419928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.095 [2024-11-27 04:45:32.424631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:36.095 [2024-11-27 04:45:32.424663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:36.095 [2024-11-27 04:45:32.424672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.646 ms 00:28:36.095 [2024-11-27 04:45:32.424683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.095 [2024-11-27 04:45:32.424763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.095 [2024-11-27 04:45:32.424773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:36.095 [2024-11-27 04:45:32.424781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:28:36.095 [2024-11-27 04:45:32.424788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.095 [2024-11-27 04:45:32.424835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.095 [2024-11-27 04:45:32.424845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:36.095 [2024-11-27 04:45:32.424852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:36.095 [2024-11-27 04:45:32.424860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.095 [2024-11-27 04:45:32.424884] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:36.095 [2024-11-27 04:45:32.428178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.095 [2024-11-27 04:45:32.428205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:36.095 [2024-11-27 04:45:32.428217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 00:28:36.095 [2024-11-27 04:45:32.428225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.095 [2024-11-27 04:45:32.428251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.095 [2024-11-27 04:45:32.428260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:36.095 [2024-11-27 04:45:32.428267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:36.095 [2024-11-27 04:45:32.428275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.095 [2024-11-27 04:45:32.428293] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:36.095 [2024-11-27 04:45:32.428311] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:36.095 [2024-11-27 04:45:32.428344] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:36.095 [2024-11-27 04:45:32.428360] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:36.095 [2024-11-27 04:45:32.428462] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:36.095 [2024-11-27 04:45:32.428472] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:36.095 [2024-11-27 04:45:32.428482] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:36.095 [2024-11-27 04:45:32.428492] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:36.095 [2024-11-27 04:45:32.428501] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:36.095 [2024-11-27 04:45:32.428509] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:36.095 [2024-11-27 04:45:32.428515] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:36.095 [2024-11-27 04:45:32.428525] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:36.095 [2024-11-27 04:45:32.428531] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:36.095 [2024-11-27 04:45:32.428538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.095 [2024-11-27 04:45:32.428545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:36.095 [2024-11-27 04:45:32.428553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:28:36.095 [2024-11-27 04:45:32.428559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.095 [2024-11-27 04:45:32.428642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.095 [2024-11-27 04:45:32.428649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:36.095 [2024-11-27 04:45:32.428656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:36.095 [2024-11-27 04:45:32.428663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.095 [2024-11-27 04:45:32.428781] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:36.095 [2024-11-27 04:45:32.428792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:36.095 [2024-11-27 04:45:32.428800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:36.095 [2024-11-27 04:45:32.428808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.095 [2024-11-27 04:45:32.428815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:36.095 [2024-11-27 04:45:32.428821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:36.096 [2024-11-27 04:45:32.428829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:36.096 [2024-11-27 04:45:32.428837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:36.096 [2024-11-27 04:45:32.428844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:36.096 [2024-11-27 04:45:32.428851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:36.096 [2024-11-27 04:45:32.428858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:36.096 [2024-11-27 04:45:32.428865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:36.096 [2024-11-27 04:45:32.428871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:36.096 [2024-11-27 04:45:32.428882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:36.096 [2024-11-27 04:45:32.428889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:36.096 [2024-11-27 04:45:32.428895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.096 [2024-11-27 04:45:32.428903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:36.096 [2024-11-27 04:45:32.428910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:36.096 [2024-11-27 04:45:32.428916] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.096 [2024-11-27 04:45:32.428922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:36.096 [2024-11-27 04:45:32.428928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:36.096 [2024-11-27 04:45:32.428935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.096 [2024-11-27 04:45:32.428941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:36.096 [2024-11-27 04:45:32.428947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:36.096 [2024-11-27 04:45:32.428954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.096 [2024-11-27 04:45:32.428960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:36.096 [2024-11-27 04:45:32.428966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:36.096 [2024-11-27 04:45:32.428972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.096 [2024-11-27 04:45:32.428994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:36.096 [2024-11-27 04:45:32.429001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:36.096 [2024-11-27 04:45:32.429007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.096 [2024-11-27 04:45:32.429014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:36.096 [2024-11-27 04:45:32.429020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:36.096 [2024-11-27 04:45:32.429027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:36.096 [2024-11-27 04:45:32.429033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:36.096 [2024-11-27 04:45:32.429039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:36.096 [2024-11-27 04:45:32.429046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:36.096 [2024-11-27 04:45:32.429052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:36.096 [2024-11-27 04:45:32.429059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:36.096 [2024-11-27 04:45:32.429065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.096 [2024-11-27 04:45:32.429071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:36.096 [2024-11-27 04:45:32.429077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:36.096 [2024-11-27 04:45:32.429084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.096 [2024-11-27 04:45:32.429090] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:36.096 [2024-11-27 04:45:32.429097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:36.096 [2024-11-27 04:45:32.429104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:36.096 [2024-11-27 04:45:32.429111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.096 [2024-11-27 04:45:32.429118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:36.096 [2024-11-27 04:45:32.429126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:36.096 [2024-11-27 04:45:32.429133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:36.096 
[2024-11-27 04:45:32.429140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:36.096 [2024-11-27 04:45:32.429146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:36.096 [2024-11-27 04:45:32.429153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:36.096 [2024-11-27 04:45:32.429161] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:36.096 [2024-11-27 04:45:32.429170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:36.096 [2024-11-27 04:45:32.429180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:36.096 [2024-11-27 04:45:32.429187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:36.096 [2024-11-27 04:45:32.429194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:36.096 [2024-11-27 04:45:32.429201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:36.096 [2024-11-27 04:45:32.429208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:36.096 [2024-11-27 04:45:32.429215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:36.096 [2024-11-27 04:45:32.429221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:36.096 [2024-11-27 04:45:32.429229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:36.096 [2024-11-27 04:45:32.429236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:36.096 [2024-11-27 04:45:32.429243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:36.096 [2024-11-27 04:45:32.429251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:36.096 [2024-11-27 04:45:32.429258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:36.096 [2024-11-27 04:45:32.429265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:36.096 [2024-11-27 04:45:32.429273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:36.096 [2024-11-27 04:45:32.429279] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:36.096 [2024-11-27 04:45:32.429287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:36.096 [2024-11-27 04:45:32.429295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:36.096 [2024-11-27 04:45:32.429302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:36.096 [2024-11-27 04:45:32.429310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:36.096 [2024-11-27 04:45:32.429317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:36.096 [2024-11-27 04:45:32.429324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.096 [2024-11-27 04:45:32.429331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:36.096 [2024-11-27 04:45:32.429338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:28:36.097 [2024-11-27 04:45:32.429346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.455188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.455223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:36.097 [2024-11-27 04:45:32.455234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.785 ms 00:28:36.097 [2024-11-27 04:45:32.455245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.455325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.455334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:36.097 [2024-11-27 04:45:32.455341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:36.097 [2024-11-27 04:45:32.455349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.500079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.500123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:36.097 [2024-11-27 04:45:32.500135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.676 ms 00:28:36.097 [2024-11-27 04:45:32.500144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.500191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.500201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:36.097 [2024-11-27 04:45:32.500212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:36.097 [2024-11-27 04:45:32.500220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.500582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.500605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:36.097 [2024-11-27 04:45:32.500615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:28:36.097 [2024-11-27 04:45:32.500623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.500760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.501268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:36.097 [2024-11-27 04:45:32.501300] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:28:36.097 [2024-11-27 04:45:32.501309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.514209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.514244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:36.097 [2024-11-27 04:45:32.514254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.862 ms 00:28:36.097 [2024-11-27 04:45:32.514262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.526465] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:36.097 [2024-11-27 04:45:32.526500] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:36.097 [2024-11-27 04:45:32.526512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.526520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:36.097 [2024-11-27 04:45:32.526530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.153 ms 00:28:36.097 [2024-11-27 04:45:32.526537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.550992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.551032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:36.097 [2024-11-27 04:45:32.551044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.418 ms 00:28:36.097 [2024-11-27 04:45:32.551051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.562737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.562768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:36.097 [2024-11-27 04:45:32.562777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.645 ms 00:28:36.097 [2024-11-27 04:45:32.562784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.573828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.573859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:36.097 [2024-11-27 04:45:32.573869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.013 ms 00:28:36.097 [2024-11-27 04:45:32.573877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.574476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.574494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:36.097 [2024-11-27 04:45:32.574503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:28:36.097 [2024-11-27 04:45:32.574512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.628529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.628710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:36.097 [2024-11-27 04:45:32.628750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.999 ms 00:28:36.097 [2024-11-27 04:45:32.628764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.639539] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:36.097 [2024-11-27 04:45:32.641850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.641880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:36.097 [2024-11-27 04:45:32.641891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.047 ms 00:28:36.097 [2024-11-27 04:45:32.641901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.641991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.642001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:36.097 [2024-11-27 04:45:32.642010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:36.097 [2024-11-27 04:45:32.642018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.642085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.642095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:36.097 [2024-11-27 04:45:32.642104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:36.097 [2024-11-27 04:45:32.642111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.642129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.642137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:36.097 [2024-11-27 04:45:32.642145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:36.097 [2024-11-27 04:45:32.642151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.642180] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:36.097 [2024-11-27 04:45:32.642191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.642199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:36.097 [2024-11-27 04:45:32.642207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:36.097 [2024-11-27 04:45:32.642214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.665474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.665588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:36.097 [2024-11-27 04:45:32.665638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.244 ms 00:28:36.097 [2024-11-27 04:45:32.665665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.097 [2024-11-27 04:45:32.665783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.097 [2024-11-27 04:45:32.665844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:36.098 [2024-11-27 04:45:32.665891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:36.098 [2024-11-27 04:45:32.665914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:36.098 [2024-11-27 04:45:32.666824] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 260.849 ms, result 0 00:28:37.471  [2024-11-27T04:45:34.991Z] Copying: 46/1024 [MB] (46 MBps) [2024-11-27T04:45:35.925Z] Copying: 92/1024 [MB] (46 MBps) [2024-11-27T04:45:36.871Z] Copying: 141/1024 [MB] (48 MBps) [2024-11-27T04:45:37.823Z] Copying: 187/1024 [MB] (46 MBps) [2024-11-27T04:45:38.757Z] Copying: 237/1024 [MB] (49 MBps) [2024-11-27T04:45:39.691Z] Copying: 284/1024 [MB] (46 MBps) [2024-11-27T04:45:41.071Z] Copying: 329/1024 [MB] (44 MBps) [2024-11-27T04:45:42.016Z] Copying: 374/1024 [MB] (45 MBps) [2024-11-27T04:45:42.950Z] Copying: 419/1024 [MB] (45 MBps) [2024-11-27T04:45:43.884Z] Copying: 465/1024 [MB] (45 MBps) [2024-11-27T04:45:44.824Z] Copying: 511/1024 [MB] (45 MBps) [2024-11-27T04:45:45.829Z] Copying: 558/1024 [MB] (47 MBps) [2024-11-27T04:45:46.764Z] Copying: 608/1024 [MB] (49 MBps) [2024-11-27T04:45:47.698Z] Copying: 662/1024 [MB] (53 MBps) [2024-11-27T04:45:48.716Z] Copying: 713/1024 [MB] (50 MBps) [2024-11-27T04:45:50.092Z] Copying: 759/1024 [MB] (46 MBps) [2024-11-27T04:45:51.030Z] Copying: 802/1024 [MB] (43 MBps) [2024-11-27T04:45:51.972Z] Copying: 849/1024 [MB] (46 MBps) [2024-11-27T04:45:52.914Z] Copying: 894/1024 [MB] (44 MBps) [2024-11-27T04:45:53.854Z] Copying: 937/1024 [MB] (43 MBps) [2024-11-27T04:45:54.797Z] Copying: 983/1024 [MB] (46 MBps) [2024-11-27T04:45:54.797Z] Copying: 1024/1024 [MB] (average 46 MBps)[2024-11-27 04:45:54.557883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.210 [2024-11-27 04:45:54.557934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:58.210 [2024-11-27 04:45:54.557947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:58.210 [2024-11-27 04:45:54.557955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.210 [2024-11-27 04:45:54.557975] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:58.210 [2024-11-27 04:45:54.560546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.210 [2024-11-27 04:45:54.560575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:58.210 [2024-11-27 04:45:54.560591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.557 ms 00:28:58.210 [2024-11-27 04:45:54.560599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.210 [2024-11-27 04:45:54.561876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.210 [2024-11-27 04:45:54.561904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:58.210 [2024-11-27 04:45:54.561913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:28:58.210 [2024-11-27 04:45:54.561921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.210 [2024-11-27 04:45:54.561944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.210 [2024-11-27 04:45:54.561952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:58.210 [2024-11-27 04:45:54.561959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:58.210 [2024-11-27 04:45:54.561966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.210 [2024-11-27 04:45:54.562013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:58.210 [2024-11-27 04:45:54.562021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:58.210 [2024-11-27 04:45:54.562029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:58.210 [2024-11-27 04:45:54.562035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.210 [2024-11-27 04:45:54.562047] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:58.210 [2024-11-27 04:45:54.562059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562214] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:58.210 [2024-11-27 04:45:54.562286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 
04:45:54.562401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:28:58.211 [2024-11-27 04:45:54.562582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 
wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:58.211 [2024-11-27 04:45:54.562808] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:58.211 [2024-11-27 04:45:54.562816] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 77665fbb-ce7f-48bd-bba3-969864502bb5 00:28:58.211 [2024-11-27 04:45:54.562823] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:58.211 [2024-11-27 04:45:54.562838] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:58.211 [2024-11-27 04:45:54.562845] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:58.211 [2024-11-27 04:45:54.562858] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:58.211 [2024-11-27 04:45:54.562865] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:58.211 [2024-11-27 04:45:54.562872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:58.211 [2024-11-27 04:45:54.562879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:58.211 [2024-11-27 04:45:54.562885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:58.211 [2024-11-27 04:45:54.562891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:58.211 [2024-11-27 04:45:54.562897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.211 [2024-11-27 04:45:54.562904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:58.211 [2024-11-27 04:45:54.562912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:28:58.211 [2024-11-27 04:45:54.562918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.211 [2024-11-27 04:45:54.575248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.211 [2024-11-27 04:45:54.575281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:58.211 [2024-11-27 04:45:54.575291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.316 ms 00:28:58.211 [2024-11-27 04:45:54.575298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.211 [2024-11-27 04:45:54.575631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.211 [2024-11-27 04:45:54.575639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:58.212 [2024-11-27 04:45:54.575647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:28:58.212 [2024-11-27 04:45:54.575653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.212 [2024-11-27 04:45:54.608060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.212 [2024-11-27 04:45:54.608091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:58.212 [2024-11-27 04:45:54.608101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:28:58.212 [2024-11-27 04:45:54.608108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.212 [2024-11-27 04:45:54.608162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.212 [2024-11-27 04:45:54.608170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:58.212 [2024-11-27 04:45:54.608177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.212 [2024-11-27 04:45:54.608185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.212 [2024-11-27 04:45:54.608223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.212 [2024-11-27 04:45:54.608234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:58.212 [2024-11-27 04:45:54.608242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.212 [2024-11-27 04:45:54.608249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.212 [2024-11-27 04:45:54.608262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.212 [2024-11-27 04:45:54.608270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:58.212 [2024-11-27 04:45:54.608280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.212 [2024-11-27 04:45:54.608287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.212 [2024-11-27 04:45:54.685166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.212 [2024-11-27 04:45:54.685211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:58.212 [2024-11-27 04:45:54.685223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.212 [2024-11-27 04:45:54.685231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.212 [2024-11-27 04:45:54.748187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.212 [2024-11-27 04:45:54.748231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:58.212 [2024-11-27 04:45:54.748242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.212 [2024-11-27 04:45:54.748251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.212 [2024-11-27 04:45:54.748317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.212 [2024-11-27 04:45:54.748326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:58.212 [2024-11-27 04:45:54.748338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.212 [2024-11-27 04:45:54.748345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.212 [2024-11-27 04:45:54.748380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.212 [2024-11-27 04:45:54.748388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:58.212 [2024-11-27 04:45:54.748397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.212 [2024-11-27 04:45:54.748404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.212 [2024-11-27 04:45:54.748469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.212 [2024-11-27 04:45:54.748478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:58.212 [2024-11-27 
04:45:54.748492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.212 [2024-11-27 04:45:54.748501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.212 [2024-11-27 04:45:54.748523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.212 [2024-11-27 04:45:54.748531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:58.212 [2024-11-27 04:45:54.748539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.212 [2024-11-27 04:45:54.748546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.212 [2024-11-27 04:45:54.748576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.212 [2024-11-27 04:45:54.748584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:58.212 [2024-11-27 04:45:54.748591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.212 [2024-11-27 04:45:54.748601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.212 [2024-11-27 04:45:54.748636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.212 [2024-11-27 04:45:54.748645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:58.212 [2024-11-27 04:45:54.748653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.212 [2024-11-27 04:45:54.748660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.212 [2024-11-27 04:45:54.748777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 190.853 ms, result 0 00:28:59.156 00:28:59.156 00:28:59.156 04:45:55 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:28:59.156 [2024-11-27 04:45:55.529960] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:28:59.156 [2024-11-27 04:45:55.530236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81391 ] 00:28:59.156 [2024-11-27 04:45:55.698715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.414 [2024-11-27 04:45:55.796015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.676 [2024-11-27 04:45:56.052782] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:59.676 [2024-11-27 04:45:56.052843] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:59.676 [2024-11-27 04:45:56.205567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.676 [2024-11-27 04:45:56.205618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:59.676 [2024-11-27 04:45:56.205631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:59.676 [2024-11-27 04:45:56.205639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.676 [2024-11-27 04:45:56.205682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.676 [2024-11-27 04:45:56.205695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:59.676 [2024-11-27 04:45:56.205703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:59.676 [2024-11-27 04:45:56.205710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.676 [2024-11-27 04:45:56.205741] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:59.676 [2024-11-27 04:45:56.206449] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:59.676 [2024-11-27 04:45:56.206463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.676 [2024-11-27 04:45:56.206471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:59.676 [2024-11-27 04:45:56.206480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:28:59.676 [2024-11-27 04:45:56.206487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.676 [2024-11-27 04:45:56.206717] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:59.676 [2024-11-27 04:45:56.206751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.676 [2024-11-27 04:45:56.206761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:59.676 [2024-11-27 04:45:56.206769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:59.676 [2024-11-27 04:45:56.206776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.676 [2024-11-27 04:45:56.206840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.676 [2024-11-27 04:45:56.206850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:59.676 [2024-11-27 04:45:56.206858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:59.676 [2024-11-27 04:45:56.206865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.676 [2024-11-27 04:45:56.207114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:59.676 [2024-11-27 04:45:56.207124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:59.676 [2024-11-27 04:45:56.207132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:28:59.676 [2024-11-27 04:45:56.207139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.676 [2024-11-27 04:45:56.207198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.676 [2024-11-27 04:45:56.207207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:59.676 [2024-11-27 04:45:56.207214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:59.676 [2024-11-27 04:45:56.207221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.676 [2024-11-27 04:45:56.207241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.676 [2024-11-27 04:45:56.207248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:59.676 [2024-11-27 04:45:56.207257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:59.676 [2024-11-27 04:45:56.207264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.676 [2024-11-27 04:45:56.207280] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:59.676 [2024-11-27 04:45:56.210741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.676 [2024-11-27 04:45:56.210769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:59.676 [2024-11-27 04:45:56.210778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.465 ms 00:28:59.676 [2024-11-27 04:45:56.210785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.676 [2024-11-27 04:45:56.210816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.676 [2024-11-27 04:45:56.210824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:59.676 [2024-11-27 04:45:56.210831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:59.676 [2024-11-27 04:45:56.210838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.676 [2024-11-27 04:45:56.210874] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:59.676 [2024-11-27 04:45:56.210894] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:59.676 [2024-11-27 04:45:56.210928] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:59.676 [2024-11-27 04:45:56.210942] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:59.676 [2024-11-27 04:45:56.211043] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:59.676 [2024-11-27 04:45:56.211052] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:59.676 [2024-11-27 04:45:56.211063] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:59.676 [2024-11-27 04:45:56.211072] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:59.676 [2024-11-27 04:45:56.211081] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:59.676 [2024-11-27 04:45:56.211090] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:59.676 [2024-11-27 04:45:56.211098] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:59.676 [2024-11-27 04:45:56.211105] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:59.676 [2024-11-27 04:45:56.211112] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:59.676 [2024-11-27 04:45:56.211120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.676 [2024-11-27 04:45:56.211126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:59.676 [2024-11-27 04:45:56.211134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:28:59.676 [2024-11-27 04:45:56.211141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.676 [2024-11-27 04:45:56.211222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.676 [2024-11-27 04:45:56.211235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:59.676 [2024-11-27 04:45:56.211242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:59.676 [2024-11-27 04:45:56.211251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.676 [2024-11-27 04:45:56.211349] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:59.676 [2024-11-27 04:45:56.211358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:59.676 [2024-11-27 04:45:56.211366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:59.676 [2024-11-27 04:45:56.211374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.676 [2024-11-27 04:45:56.211381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:59.676 [2024-11-27 04:45:56.211387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:59.676 [2024-11-27 04:45:56.211394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:59.676 [2024-11-27 04:45:56.211400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:59.676 [2024-11-27 04:45:56.211407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:59.676 [2024-11-27 04:45:56.211413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:59.676 [2024-11-27 04:45:56.211419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:59.677 [2024-11-27 04:45:56.211425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:59.677 [2024-11-27 04:45:56.211432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:59.677 [2024-11-27 04:45:56.211439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:59.677 [2024-11-27 04:45:56.211446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:59.677 [2024-11-27 04:45:56.211457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.677 [2024-11-27 04:45:56.211464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:59.677 [2024-11-27 04:45:56.211471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:59.677 [2024-11-27 04:45:56.211477] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.677 [2024-11-27 04:45:56.211483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:59.677 [2024-11-27 04:45:56.211490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:59.677 [2024-11-27 04:45:56.211496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:59.677 [2024-11-27 04:45:56.211502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:59.677 [2024-11-27 04:45:56.211509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:59.677 [2024-11-27 04:45:56.211515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:59.677 [2024-11-27 04:45:56.211521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:59.677 [2024-11-27 04:45:56.211528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:59.677 [2024-11-27 04:45:56.211534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:59.677 [2024-11-27 04:45:56.211540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:59.677 [2024-11-27 04:45:56.211547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:59.677 [2024-11-27 04:45:56.211553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:59.677 [2024-11-27 04:45:56.211559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:59.677 [2024-11-27 04:45:56.211566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:59.677 [2024-11-27 04:45:56.211573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:59.677 [2024-11-27 04:45:56.211579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:59.677 [2024-11-27 04:45:56.211585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:59.677 [2024-11-27 04:45:56.211591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:59.677 [2024-11-27 04:45:56.211597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:59.677 [2024-11-27 04:45:56.211604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:59.677 [2024-11-27 04:45:56.211611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.677 [2024-11-27 04:45:56.211617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:59.677 [2024-11-27 04:45:56.211623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:59.677 [2024-11-27 04:45:56.211629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.677 [2024-11-27 04:45:56.211635] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:59.677 [2024-11-27 04:45:56.211642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:59.677 [2024-11-27 04:45:56.211648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:59.677 [2024-11-27 04:45:56.211656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.677 [2024-11-27 04:45:56.211665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:59.677 [2024-11-27 04:45:56.211671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:59.677 [2024-11-27 04:45:56.211678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:59.677 
[2024-11-27 04:45:56.211684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:59.677 [2024-11-27 04:45:56.211690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:59.677 [2024-11-27 04:45:56.211697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:59.677 [2024-11-27 04:45:56.211704] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:59.677 [2024-11-27 04:45:56.211713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:59.677 [2024-11-27 04:45:56.211741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:59.677 [2024-11-27 04:45:56.211749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:59.677 [2024-11-27 04:45:56.211757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:59.677 [2024-11-27 04:45:56.211763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:59.677 [2024-11-27 04:45:56.211770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:59.677 [2024-11-27 04:45:56.211777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:59.677 [2024-11-27 04:45:56.211784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:59.677 [2024-11-27 04:45:56.211791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:59.677 [2024-11-27 04:45:56.211798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:59.677 [2024-11-27 04:45:56.211807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:59.677 [2024-11-27 04:45:56.211814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:59.677 [2024-11-27 04:45:56.211821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:59.677 [2024-11-27 04:45:56.211828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:59.677 [2024-11-27 04:45:56.211835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:59.677 [2024-11-27 04:45:56.211842] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:59.677 [2024-11-27 04:45:56.211850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:59.677 [2024-11-27 04:45:56.211857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:59.677 [2024-11-27 04:45:56.211865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:59.677 [2024-11-27 04:45:56.211872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:59.677 [2024-11-27 04:45:56.211879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:59.677 [2024-11-27 04:45:56.211886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.677 [2024-11-27 04:45:56.211893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:59.677 [2024-11-27 04:45:56.211901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:28:59.677 [2024-11-27 04:45:56.211911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.677 [2024-11-27 04:45:56.235247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.677 [2024-11-27 04:45:56.235370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:59.677 [2024-11-27 04:45:56.235424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.295 ms 00:28:59.677 [2024-11-27 04:45:56.235448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.677 [2024-11-27 04:45:56.235544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.677 [2024-11-27 04:45:56.235566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:59.677 [2024-11-27 04:45:56.235589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:59.677 [2024-11-27 04:45:56.235607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.938 [2024-11-27 04:45:56.276862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.938 [2024-11-27 04:45:56.277000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:59.938 [2024-11-27 04:45:56.277195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.187 ms 00:28:59.938 [2024-11-27 04:45:56.277222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.938 [2024-11-27 04:45:56.277279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.938 [2024-11-27 04:45:56.277431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:59.938 [2024-11-27 04:45:56.277452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:59.938 [2024-11-27 04:45:56.277513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.938 [2024-11-27 04:45:56.277627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.938 [2024-11-27 04:45:56.278019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:59.938 [2024-11-27 04:45:56.278104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:28:59.938 [2024-11-27 04:45:56.278131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.938 [2024-11-27 04:45:56.278314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.938 [2024-11-27 04:45:56.278381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:59.938 [2024-11-27 04:45:56.278424] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:28:59.938 [2024-11-27 04:45:56.278477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.939 [2024-11-27 04:45:56.291414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.939 [2024-11-27 04:45:56.291516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:59.939 [2024-11-27 04:45:56.291567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.901 ms 00:28:59.939 [2024-11-27 04:45:56.291589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.939 [2024-11-27 04:45:56.292046] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:59.939 [2024-11-27 04:45:56.292191] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:59.939 [2024-11-27 04:45:56.292255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.939 [2024-11-27 04:45:56.292283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:59.939 [2024-11-27 04:45:56.292304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:28:59.939 [2024-11-27 04:45:56.292369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.939 [2024-11-27 04:45:56.304611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.939 [2024-11-27 04:45:56.304700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:59.939 [2024-11-27 04:45:56.304783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.210 ms 00:28:59.939 [2024-11-27 04:45:56.304825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.939 [2024-11-27 04:45:56.304946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.939 [2024-11-27 04:45:56.305006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:59.939 [2024-11-27 04:45:56.305136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:28:59.939 [2024-11-27 04:45:56.305150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.939 [2024-11-27 04:45:56.305214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.939 [2024-11-27 04:45:56.305225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:59.939 [2024-11-27 04:45:56.305238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:28:59.939 [2024-11-27 04:45:56.305246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.939 [2024-11-27 04:45:56.305818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.939 [2024-11-27 04:45:56.305836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:59.939 [2024-11-27 04:45:56.305856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:28:59.939 [2024-11-27 04:45:56.305864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.939 [2024-11-27 04:45:56.305882] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:59.939 [2024-11-27 04:45:56.305891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.939 [2024-11-27 04:45:56.305898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:28:59.939 [2024-11-27 04:45:56.305905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:59.939 [2024-11-27 04:45:56.305912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.939 [2024-11-27 04:45:56.316708] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:59.939 [2024-11-27 04:45:56.316923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.939 [2024-11-27 04:45:56.316936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:59.939 [2024-11-27 04:45:56.316946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.994 ms 00:28:59.939 [2024-11-27 04:45:56.316953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.939 [2024-11-27 04:45:56.319092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.939 [2024-11-27 04:45:56.319115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:59.939 [2024-11-27 04:45:56.319124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.121 ms 00:28:59.939 [2024-11-27 04:45:56.319131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.939 [2024-11-27 04:45:56.319207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.939 [2024-11-27 04:45:56.319218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:59.939 [2024-11-27 04:45:56.319226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:59.939 [2024-11-27 04:45:56.319233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.939 [2024-11-27 04:45:56.319253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.939 [2024-11-27 04:45:56.319265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:59.939 [2024-11-27 04:45:56.319273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:59.939 [2024-11-27 04:45:56.319279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.939 [2024-11-27 04:45:56.319303] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:59.939 [2024-11-27 04:45:56.319313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.939 [2024-11-27 04:45:56.319320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:59.939 [2024-11-27 04:45:56.319328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:59.939 [2024-11-27 04:45:56.319335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.939 [2024-11-27 04:45:56.342977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.939 [2024-11-27 04:45:56.343084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:59.939 [2024-11-27 04:45:56.343099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.623 ms 00:28:59.939 [2024-11-27 04:45:56.343106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.939 [2024-11-27 04:45:56.343167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.939 [2024-11-27 04:45:56.343176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:59.939 [2024-11-27 04:45:56.343184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.034 ms 00:28:59.939 [2024-11-27 04:45:56.343192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.939 [2024-11-27 04:45:56.344020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 138.046 ms, result 0 00:29:01.321  [2024-11-27T04:45:58.844Z] Copying: 50/1024 [MB] (50 MBps) [2024-11-27T04:45:59.785Z] Copying: 95/1024 [MB] (44 MBps) [2024-11-27T04:46:00.729Z] Copying: 136/1024 [MB] (40 MBps) [2024-11-27T04:46:01.715Z] Copying: 166/1024 [MB] (30 MBps) [2024-11-27T04:46:02.656Z] Copying: 197/1024 [MB] (30 MBps) [2024-11-27T04:46:03.597Z] Copying: 227/1024 [MB] (30 MBps) [2024-11-27T04:46:04.538Z] Copying: 246/1024 [MB] (18 MBps) [2024-11-27T04:46:05.926Z] Copying: 257/1024 [MB] (11 MBps) [2024-11-27T04:46:06.871Z] Copying: 267/1024 [MB] (10 MBps) [2024-11-27T04:46:07.812Z] Copying: 278/1024 [MB] (10 MBps) [2024-11-27T04:46:08.756Z] Copying: 288/1024 [MB] (10 MBps) [2024-11-27T04:46:09.698Z] Copying: 299/1024 [MB] (10 MBps) [2024-11-27T04:46:10.640Z] Copying: 316244/1048576 [kB] (9924 kBps) [2024-11-27T04:46:11.582Z] Copying: 326244/1048576 [kB] (10000 kBps) [2024-11-27T04:46:12.524Z] Copying: 336296/1048576 [kB] (10052 kBps) [2024-11-27T04:46:13.909Z] Copying: 339/1024 [MB] (10 MBps) [2024-11-27T04:46:14.868Z] Copying: 357364/1048576 [kB] (9944 kBps) [2024-11-27T04:46:15.812Z] Copying: 367396/1048576 [kB] (10032 kBps) [2024-11-27T04:46:16.754Z] Copying: 376816/1048576 [kB] (9420 kBps) [2024-11-27T04:46:17.695Z] Copying: 378/1024 [MB] (10 MBps) [2024-11-27T04:46:18.648Z] Copying: 389/1024 [MB] (10 MBps) [2024-11-27T04:46:19.592Z] Copying: 408264/1048576 [kB] (9132 kBps) [2024-11-27T04:46:20.534Z] Copying: 417764/1048576 [kB] (9500 kBps) [2024-11-27T04:46:21.921Z] Copying: 427244/1048576 [kB] (9480 kBps) [2024-11-27T04:46:22.928Z] Copying: 427/1024 [MB] (10 MBps) [2024-11-27T04:46:23.521Z] Copying: 437/1024 [MB] (10 MBps) [2024-11-27T04:46:24.907Z] Copying: 448/1024 [MB] (10 MBps) [2024-11-27T04:46:25.852Z] Copying: 468956/1048576 [kB] (9776 kBps) [2024-11-27T04:46:26.798Z] Copying: 478928/1048576 [kB] (9972 kBps) [2024-11-27T04:46:27.743Z] Copying: 479/1024 [MB] (11 MBps) [2024-11-27T04:46:28.688Z] Copying: 500664/1048576 [kB] (10008 kBps) [2024-11-27T04:46:29.648Z] Copying: 510408/1048576 [kB] (9744 kBps) [2024-11-27T04:46:30.593Z] Copying: 520180/1048576 [kB] (9772 kBps) [2024-11-27T04:46:31.565Z] Copying: 529912/1048576 [kB] (9732 kBps) [2024-11-27T04:46:32.954Z] Copying: 538884/1048576 [kB] (8972 kBps) [2024-11-27T04:46:33.526Z] Copying: 547952/1048576 [kB] (9068 kBps) [2024-11-27T04:46:34.913Z] Copying: 557276/1048576 [kB] (9324 kBps) [2024-11-27T04:46:35.856Z] Copying: 566660/1048576 [kB] (9384 kBps) [2024-11-27T04:46:36.799Z] Copying: 563/1024 [MB] (10 MBps) [2024-11-27T04:46:37.742Z] Copying: 573/1024 [MB] (10 MBps) [2024-11-27T04:46:38.685Z] Copying: 596796/1048576 [kB] (9312 kBps) [2024-11-27T04:46:39.706Z] Copying: 605924/1048576 [kB] (9128 kBps) [2024-11-27T04:46:40.648Z] Copying: 624/1024 [MB] (33 MBps) [2024-11-27T04:46:41.590Z] Copying: 671/1024 [MB] (46 MBps) [2024-11-27T04:46:42.535Z] Copying: 717/1024 [MB] (45 MBps) [2024-11-27T04:46:44.073Z] Copying: 766/1024 [MB] (49 MBps) [2024-11-27T04:46:44.646Z] Copying: 815/1024 [MB] (48 MBps) [2024-11-27T04:46:45.591Z] Copying: 848/1024 [MB] (33 MBps) [2024-11-27T04:46:46.535Z] Copying: 887/1024 [MB] (39 MBps) [2024-11-27T04:46:47.922Z] Copying: 931/1024 [MB] (43 MBps) [2024-11-27T04:46:48.549Z] Copying: 979/1024 [MB] (48 
MBps) [2024-11-27T04:46:48.812Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-11-27 04:46:48.707262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.225 [2024-11-27 04:46:48.707323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:52.225 [2024-11-27 04:46:48.707336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:52.225 [2024-11-27 04:46:48.707344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.225 [2024-11-27 04:46:48.707375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:52.225 [2024-11-27 04:46:48.711074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.225 [2024-11-27 04:46:48.711114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:52.225 [2024-11-27 04:46:48.711127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.684 ms 00:29:52.225 [2024-11-27 04:46:48.711139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.225 [2024-11-27 04:46:48.711413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.225 [2024-11-27 04:46:48.711432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:52.225 [2024-11-27 04:46:48.711442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:29:52.225 [2024-11-27 04:46:48.711451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.225 [2024-11-27 04:46:48.711486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.225 [2024-11-27 04:46:48.711498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:52.225 [2024-11-27 04:46:48.711508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:52.225 [2024-11-27 04:46:48.711517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.225 [2024-11-27 04:46:48.711576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.225 [2024-11-27 04:46:48.711593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:52.225 [2024-11-27 04:46:48.711602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:52.225 [2024-11-27 04:46:48.711611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.225 [2024-11-27 04:46:48.711627] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:52.225 [2024-11-27 04:46:48.711642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 
0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.711996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712200] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:52.225 [2024-11-27 04:46:48.712258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 
04:46:48.712436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:52.226 [2024-11-27 04:46:48.712625] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:52.226 [2024-11-27 04:46:48.712635] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 77665fbb-ce7f-48bd-bba3-969864502bb5 00:29:52.226 [2024-11-27 04:46:48.712646] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:52.226 [2024-11-27 04:46:48.712655] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:52.226 [2024-11-27 04:46:48.712664] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:52.226 [2024-11-27 04:46:48.712673] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:52.226 [2024-11-27 04:46:48.712681] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:52.226 [2024-11-27 
04:46:48.712691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:52.226 [2024-11-27 04:46:48.712699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:52.226 [2024-11-27 04:46:48.712707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:52.226 [2024-11-27 04:46:48.712730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:52.226 [2024-11-27 04:46:48.712740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.226 [2024-11-27 04:46:48.712749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:52.226 [2024-11-27 04:46:48.712759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:29:52.226 [2024-11-27 04:46:48.712770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.226 [2024-11-27 04:46:48.727372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.226 [2024-11-27 04:46:48.727407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:52.226 [2024-11-27 04:46:48.727417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.585 ms 00:29:52.226 [2024-11-27 04:46:48.727425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.226 [2024-11-27 04:46:48.727773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.226 [2024-11-27 04:46:48.727791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:52.226 [2024-11-27 04:46:48.727804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:29:52.226 [2024-11-27 04:46:48.727812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.226 [2024-11-27 04:46:48.760100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.226 [2024-11-27 04:46:48.760145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:52.226 [2024-11-27 04:46:48.760155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.226 [2024-11-27 04:46:48.760162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.226 [2024-11-27 04:46:48.760222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.226 [2024-11-27 04:46:48.760230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:52.226 [2024-11-27 04:46:48.760241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.226 [2024-11-27 04:46:48.760248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.226 [2024-11-27 04:46:48.760299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.226 [2024-11-27 04:46:48.760309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:52.226 [2024-11-27 04:46:48.760316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.226 [2024-11-27 04:46:48.760323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.226 [2024-11-27 04:46:48.760338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.226 [2024-11-27 04:46:48.760345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:52.226 [2024-11-27 04:46:48.760353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.226 [2024-11-27 04:46:48.760363] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:52.488 [2024-11-27 04:46:48.836494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.488 [2024-11-27 04:46:48.836536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:52.488 [2024-11-27 04:46:48.836548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.488 [2024-11-27 04:46:48.836556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.488 [2024-11-27 04:46:48.898555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.488 [2024-11-27 04:46:48.898598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:52.488 [2024-11-27 04:46:48.898608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.488 [2024-11-27 04:46:48.898621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.488 [2024-11-27 04:46:48.898687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.488 [2024-11-27 04:46:48.898697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:52.488 [2024-11-27 04:46:48.898705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.488 [2024-11-27 04:46:48.898711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.488 [2024-11-27 04:46:48.898760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.488 [2024-11-27 04:46:48.898769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:52.488 [2024-11-27 04:46:48.898777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.488 [2024-11-27 04:46:48.898784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.488 [2024-11-27 04:46:48.898857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.488 [2024-11-27 04:46:48.898866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:52.488 [2024-11-27 04:46:48.898873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.488 [2024-11-27 04:46:48.898880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.488 [2024-11-27 04:46:48.898903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.488 [2024-11-27 04:46:48.898911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:52.488 [2024-11-27 04:46:48.898918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.488 [2024-11-27 04:46:48.898925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.488 [2024-11-27 04:46:48.898958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.488 [2024-11-27 04:46:48.898966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:52.488 [2024-11-27 04:46:48.898974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.488 [2024-11-27 04:46:48.898981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.488 [2024-11-27 04:46:48.899017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.488 [2024-11-27 04:46:48.899031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:52.488 [2024-11-27 04:46:48.899039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.488 
[2024-11-27 04:46:48.899046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.488 [2024-11-27 04:46:48.899152] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 191.865 ms, result 0 00:29:53.084 00:29:53.084 00:29:53.084 04:46:49 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:55.626 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:55.626 04:46:51 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:29:55.626 [2024-11-27 04:46:51.790584] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:29:55.626 [2024-11-27 04:46:51.790707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81960 ] 00:29:55.626 [2024-11-27 04:46:51.950883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.626 [2024-11-27 04:46:52.050771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.910 [2024-11-27 04:46:52.312329] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:55.910 [2024-11-27 04:46:52.312396] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:55.910 [2024-11-27 04:46:52.468566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.910 [2024-11-27 04:46:52.468617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:55.910 [2024-11-27 04:46:52.468630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:55.910 [2024-11-27 04:46:52.468638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.910 [2024-11-27 04:46:52.468686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.910 [2024-11-27 04:46:52.468699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:55.911 [2024-11-27 04:46:52.468708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:55.911 [2024-11-27 04:46:52.468715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.911 [2024-11-27 04:46:52.468745] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:55.911 [2024-11-27 04:46:52.469454] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:55.911 [2024-11-27 04:46:52.469479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.911 [2024-11-27 04:46:52.469486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:55.911 [2024-11-27 04:46:52.469495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:29:55.911 [2024-11-27 04:46:52.469503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.911 [2024-11-27 04:46:52.469854] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:55.911 [2024-11-27 04:46:52.469891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.911 [2024-11-27 
04:46:52.469902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:55.911 [2024-11-27 04:46:52.469910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:55.911 [2024-11-27 04:46:52.469918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.911 [2024-11-27 04:46:52.469958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.911 [2024-11-27 04:46:52.469967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:55.911 [2024-11-27 04:46:52.469974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:55.911 [2024-11-27 04:46:52.469981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.911 [2024-11-27 04:46:52.470233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.911 [2024-11-27 04:46:52.470250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:55.911 [2024-11-27 04:46:52.470258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:29:55.911 [2024-11-27 04:46:52.470266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.911 [2024-11-27 04:46:52.470328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.911 [2024-11-27 04:46:52.470337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:55.911 [2024-11-27 04:46:52.470344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:55.911 [2024-11-27 04:46:52.470351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.911 [2024-11-27 04:46:52.470372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.911 [2024-11-27 04:46:52.470380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:55.911 [2024-11-27 04:46:52.470390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:55.911 [2024-11-27 04:46:52.470397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.911 [2024-11-27 04:46:52.470414] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:55.911 [2024-11-27 04:46:52.474032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.911 [2024-11-27 04:46:52.474063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:55.911 [2024-11-27 04:46:52.474072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:29:55.911 [2024-11-27 04:46:52.474079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.911 [2024-11-27 04:46:52.474110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.911 [2024-11-27 04:46:52.474119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:55.911 [2024-11-27 04:46:52.474126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:55.911 [2024-11-27 04:46:52.474133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.911 [2024-11-27 04:46:52.474176] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:55.911 [2024-11-27 04:46:52.474199] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:55.911 [2024-11-27 04:46:52.474235] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:55.911 [2024-11-27 04:46:52.474249] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:55.911 [2024-11-27 04:46:52.474350] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:55.911 [2024-11-27 04:46:52.474367] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:55.911 [2024-11-27 04:46:52.474377] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:55.911 [2024-11-27 04:46:52.474386] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:55.911 [2024-11-27 04:46:52.474395] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:55.911 [2024-11-27 04:46:52.474405] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:55.911 [2024-11-27 04:46:52.474413] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:55.911 [2024-11-27 04:46:52.474420] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:55.911 [2024-11-27 04:46:52.474427] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:55.911 [2024-11-27 04:46:52.474434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.911 [2024-11-27 04:46:52.474441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:55.911 [2024-11-27 04:46:52.474449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:29:55.911 [2024-11-27 04:46:52.474456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.911 [2024-11-27 04:46:52.474537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.911 [2024-11-27 04:46:52.474550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:55.911 [2024-11-27 04:46:52.474558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:55.911 [2024-11-27 04:46:52.474568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.911 [2024-11-27 04:46:52.474667] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:55.911 [2024-11-27 04:46:52.474682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:55.911 [2024-11-27 04:46:52.474691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:55.911 [2024-11-27 04:46:52.474699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.911 [2024-11-27 04:46:52.474706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:55.911 [2024-11-27 04:46:52.474713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:55.911 [2024-11-27 04:46:52.474720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:55.911 [2024-11-27 04:46:52.474737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:55.911 [2024-11-27 04:46:52.474744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:55.911 [2024-11-27 04:46:52.474751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:55.911 [2024-11-27 04:46:52.474757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md_mirror 00:29:55.911 [2024-11-27 04:46:52.474765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:55.911 [2024-11-27 04:46:52.474772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:55.911 [2024-11-27 04:46:52.474781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:55.911 [2024-11-27 04:46:52.474788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:55.911 [2024-11-27 04:46:52.474799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.911 [2024-11-27 04:46:52.474806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:55.911 [2024-11-27 04:46:52.474813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:55.911 [2024-11-27 04:46:52.474819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.911 [2024-11-27 04:46:52.474826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:55.911 [2024-11-27 04:46:52.474833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:55.911 [2024-11-27 04:46:52.474840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.911 [2024-11-27 04:46:52.474847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:55.911 [2024-11-27 04:46:52.474854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:55.911 [2024-11-27 04:46:52.474860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.911 [2024-11-27 04:46:52.474867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:55.911 [2024-11-27 04:46:52.474873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:55.911 [2024-11-27 04:46:52.474880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.911 [2024-11-27 04:46:52.474887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:55.911 [2024-11-27 04:46:52.474894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:55.911 [2024-11-27 04:46:52.474900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.911 [2024-11-27 04:46:52.474907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:55.911 [2024-11-27 04:46:52.474914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:55.911 [2024-11-27 04:46:52.474921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:55.911 [2024-11-27 04:46:52.474928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:55.911 [2024-11-27 04:46:52.474934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:55.911 [2024-11-27 04:46:52.474940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:55.911 [2024-11-27 04:46:52.474947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:55.911 [2024-11-27 04:46:52.474953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:55.911 [2024-11-27 04:46:52.474960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.911 [2024-11-27 04:46:52.474966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:55.911 [2024-11-27 04:46:52.474973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:55.911 [2024-11-27 04:46:52.474979] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.911 [2024-11-27 04:46:52.474985] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:55.911 [2024-11-27 04:46:52.474993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:55.912 [2024-11-27 04:46:52.475000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:55.912 [2024-11-27 04:46:52.475007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.912 [2024-11-27 04:46:52.475016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:55.912 [2024-11-27 04:46:52.475023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:55.912 [2024-11-27 04:46:52.475029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:55.912 [2024-11-27 04:46:52.475036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:55.912 [2024-11-27 04:46:52.475043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:55.912 [2024-11-27 04:46:52.475049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:55.912 [2024-11-27 04:46:52.475057] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:55.912 [2024-11-27 04:46:52.475067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:55.912 [2024-11-27 04:46:52.475075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:55.912 [2024-11-27 04:46:52.475082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:55.912 [2024-11-27 04:46:52.475089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:55.912 [2024-11-27 04:46:52.475096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:55.912 [2024-11-27 04:46:52.475103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:55.912 [2024-11-27 04:46:52.475110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:55.912 [2024-11-27 04:46:52.475118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:55.912 [2024-11-27 04:46:52.475125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:55.912 [2024-11-27 04:46:52.475132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:55.912 [2024-11-27 04:46:52.475139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:55.912 [2024-11-27 04:46:52.475146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:55.912 [2024-11-27 04:46:52.475153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 
ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:55.912 [2024-11-27 04:46:52.475160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:55.912 [2024-11-27 04:46:52.475167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:55.912 [2024-11-27 04:46:52.475174] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:55.912 [2024-11-27 04:46:52.475182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:55.912 [2024-11-27 04:46:52.475189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:55.912 [2024-11-27 04:46:52.475196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:55.912 [2024-11-27 04:46:52.475203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:55.912 [2024-11-27 04:46:52.475210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:55.912 [2024-11-27 04:46:52.475217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.912 [2024-11-27 04:46:52.475224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:55.912 [2024-11-27 04:46:52.475233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:29:55.912 [2024-11-27 04:46:52.475240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.498388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.498421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:56.174 [2024-11-27 04:46:52.498432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.110 ms 00:29:56.174 [2024-11-27 04:46:52.498439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.498517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.498525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:56.174 [2024-11-27 04:46:52.498535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:56.174 [2024-11-27 04:46:52.498542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.539940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.540073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:56.174 [2024-11-27 04:46:52.540090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.351 ms 00:29:56.174 [2024-11-27 04:46:52.540098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.540142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.540151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:56.174 [2024-11-27 04:46:52.540160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.003 ms 00:29:56.174 [2024-11-27 04:46:52.540167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.540261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.540271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:56.174 [2024-11-27 04:46:52.540279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:56.174 [2024-11-27 04:46:52.540286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.540397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.540408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:56.174 [2024-11-27 04:46:52.540416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:29:56.174 [2024-11-27 04:46:52.540423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.553456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.553487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:56.174 [2024-11-27 04:46:52.553497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.016 ms 00:29:56.174 [2024-11-27 04:46:52.553505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.553609] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:56.174 [2024-11-27 04:46:52.553621] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:56.174 [2024-11-27 04:46:52.553630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.553641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:56.174 [2024-11-27 04:46:52.553648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:56.174 [2024-11-27 04:46:52.553655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.565901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.565929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:56.174 [2024-11-27 04:46:52.565940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.231 ms 00:29:56.174 [2024-11-27 04:46:52.565948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.566053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.566063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:56.174 [2024-11-27 04:46:52.566070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:29:56.174 [2024-11-27 04:46:52.566081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.566139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.566149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:56.174 [2024-11-27 04:46:52.566163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:56.174 [2024-11-27 
04:46:52.566169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.566716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.566753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:56.174 [2024-11-27 04:46:52.566761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:29:56.174 [2024-11-27 04:46:52.566768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.566796] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:56.174 [2024-11-27 04:46:52.566806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.566814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:56.174 [2024-11-27 04:46:52.566823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:56.174 [2024-11-27 04:46:52.566830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.577636] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:56.174 [2024-11-27 04:46:52.577807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.577817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:56.174 [2024-11-27 04:46:52.577827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.960 ms 00:29:56.174 [2024-11-27 04:46:52.577834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.579910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.579935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:56.174 [2024-11-27 04:46:52.579945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.058 ms 00:29:56.174 [2024-11-27 04:46:52.579953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.580025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.580036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:56.174 [2024-11-27 04:46:52.580045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:56.174 [2024-11-27 04:46:52.580054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.580076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.580088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:56.174 [2024-11-27 04:46:52.580096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:56.174 [2024-11-27 04:46:52.580103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.580127] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:56.174 [2024-11-27 04:46:52.580136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.580143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:56.174 [2024-11-27 04:46:52.580151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 
00:29:56.174 [2024-11-27 04:46:52.580158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.603543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.603579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:56.174 [2024-11-27 04:46:52.603592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.366 ms 00:29:56.174 [2024-11-27 04:46:52.603601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.603670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.174 [2024-11-27 04:46:52.603679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:56.174 [2024-11-27 04:46:52.603688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:56.174 [2024-11-27 04:46:52.603695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.174 [2024-11-27 04:46:52.604561] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.603 ms, result 0 00:29:57.118  [2024-11-27T04:46:54.647Z] Copying: 41/1024 [MB] (41 MBps) [2024-11-27T04:46:56.033Z] Copying: 70/1024 [MB] (28 MBps) [2024-11-27T04:46:57.043Z] Copying: 116/1024 [MB] (46 MBps) [2024-11-27T04:46:57.647Z] Copying: 161/1024 [MB] (45 MBps) [2024-11-27T04:46:59.032Z] Copying: 189/1024 [MB] (27 MBps) [2024-11-27T04:46:59.980Z] Copying: 223/1024 [MB] (34 MBps) [2024-11-27T04:47:00.917Z] Copying: 250/1024 [MB] (26 MBps) [2024-11-27T04:47:01.906Z] Copying: 273/1024 [MB] (23 MBps) [2024-11-27T04:47:02.842Z] Copying: 317/1024 [MB] (44 MBps) [2024-11-27T04:47:03.778Z] Copying: 360/1024 [MB] (43 MBps) [2024-11-27T04:47:04.713Z] Copying: 403/1024 [MB] (42 MBps) [2024-11-27T04:47:05.691Z] Copying: 442/1024 [MB] (39 MBps) [2024-11-27T04:47:06.649Z] Copying: 485/1024 [MB] (43 MBps) [2024-11-27T04:47:08.028Z] Copying: 530/1024 [MB] (44 MBps) [2024-11-27T04:47:08.964Z] Copying: 574/1024 [MB] (43 MBps) [2024-11-27T04:47:09.898Z] Copying: 616/1024 [MB] (41 MBps) [2024-11-27T04:47:10.835Z] Copying: 639/1024 [MB] (23 MBps) [2024-11-27T04:47:11.774Z] Copying: 649/1024 [MB] (10 MBps) [2024-11-27T04:47:12.710Z] Copying: 689/1024 [MB] (40 MBps) [2024-11-27T04:47:13.648Z] Copying: 722/1024 [MB] (33 MBps) [2024-11-27T04:47:15.022Z] Copying: 765/1024 [MB] (42 MBps) [2024-11-27T04:47:15.963Z] Copying: 804/1024 [MB] (39 MBps) [2024-11-27T04:47:16.906Z] Copying: 847/1024 [MB] (42 MBps) [2024-11-27T04:47:17.843Z] Copying: 888/1024 [MB] (40 MBps) [2024-11-27T04:47:18.784Z] Copying: 928/1024 [MB] (40 MBps) [2024-11-27T04:47:19.736Z] Copying: 961/1024 [MB] (32 MBps) [2024-11-27T04:47:20.678Z] Copying: 993/1024 [MB] (32 MBps) [2024-11-27T04:47:21.622Z] Copying: 1013/1024 [MB] (20 MBps) [2024-11-27T04:47:22.571Z] Copying: 1047856/1048576 [kB] (9924 kBps) [2024-11-27T04:47:22.571Z] Copying: 1024/1024 [MB] (average 34 MBps)[2024-11-27 04:47:22.401539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.984 [2024-11-27 04:47:22.401705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:25.984 [2024-11-27 04:47:22.401815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:25.984 [2024-11-27 04:47:22.401853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.984 [2024-11-27 04:47:22.404932] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: 
*NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:25.984 [2024-11-27 04:47:22.409055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.984 [2024-11-27 04:47:22.409175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:25.984 [2024-11-27 04:47:22.409280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.918 ms 00:30:25.984 [2024-11-27 04:47:22.409389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.984 [2024-11-27 04:47:22.419562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.984 [2024-11-27 04:47:22.419614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:25.984 [2024-11-27 04:47:22.419630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.995 ms 00:30:25.984 [2024-11-27 04:47:22.419642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.984 [2024-11-27 04:47:22.419679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.984 [2024-11-27 04:47:22.419692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:25.984 [2024-11-27 04:47:22.419706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:25.984 [2024-11-27 04:47:22.419717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.985 [2024-11-27 04:47:22.419797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.985 [2024-11-27 04:47:22.419816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:25.985 [2024-11-27 04:47:22.419828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:25.985 [2024-11-27 04:47:22.419841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.985 [2024-11-27 04:47:22.419861] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:25.985 [2024-11-27 04:47:22.419877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:30:25.985 [2024-11-27 04:47:22.419893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.419906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.419920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.419934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.419947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.419960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.419973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.419986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.419999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: 
free 00:30:25.985 [2024-11-27 04:47:22.420026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 
261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.420719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.421078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.421191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.421246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.421347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.421401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.421487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.421542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.421593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.421680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.421870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.421924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.422021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.422074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:25.985 [2024-11-27 04:47:22.422158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422389] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:25.986 [2024-11-27 04:47:22.422599] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:25.986 [2024-11-27 04:47:22.422612] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 77665fbb-ce7f-48bd-bba3-969864502bb5 00:30:25.986 [2024-11-27 04:47:22.422626] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:30:25.986 [2024-11-27 04:47:22.422638] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129568 00:30:25.986 [2024-11-27 04:47:22.422650] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:30:25.986 [2024-11-27 04:47:22.422663] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:30:25.986 [2024-11-27 04:47:22.422680] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:25.986 [2024-11-27 04:47:22.422693] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:25.986 [2024-11-27 04:47:22.422706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:25.986 [2024-11-27 04:47:22.422717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:25.986 [2024-11-27 04:47:22.422741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:25.986 [2024-11-27 04:47:22.422755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.986 [2024-11-27 04:47:22.422769] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:25.986 [2024-11-27 04:47:22.422783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.894 ms 00:30:25.986 [2024-11-27 04:47:22.422796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.986 [2024-11-27 04:47:22.436179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.986 [2024-11-27 04:47:22.436214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:25.986 [2024-11-27 04:47:22.436234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.354 ms 00:30:25.986 [2024-11-27 04:47:22.436246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.986 [2024-11-27 04:47:22.436694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.986 [2024-11-27 04:47:22.436738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:25.986 [2024-11-27 04:47:22.436754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:30:25.986 [2024-11-27 04:47:22.436783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.986 [2024-11-27 04:47:22.469191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.986 [2024-11-27 04:47:22.469238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:25.986 [2024-11-27 04:47:22.469254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.986 [2024-11-27 04:47:22.469265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.986 [2024-11-27 04:47:22.469344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.986 [2024-11-27 04:47:22.469357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:25.986 [2024-11-27 04:47:22.469369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.986 [2024-11-27 04:47:22.469380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.986 [2024-11-27 04:47:22.469450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.986 [2024-11-27 04:47:22.469466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:25.986 [2024-11-27 04:47:22.469483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.986 [2024-11-27 04:47:22.469495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.986 [2024-11-27 04:47:22.469517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.986 [2024-11-27 04:47:22.469531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:25.986 [2024-11-27 04:47:22.469544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.986 [2024-11-27 04:47:22.469557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.986 [2024-11-27 04:47:22.545492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.986 [2024-11-27 04:47:22.545543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:25.986 [2024-11-27 04:47:22.545560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.986 [2024-11-27 04:47:22.545570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.275 [2024-11-27 04:47:22.608553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:30:26.275 [2024-11-27 04:47:22.608745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:26.275 [2024-11-27 04:47:22.608767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.275 [2024-11-27 04:47:22.608779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.275 [2024-11-27 04:47:22.608868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.275 [2024-11-27 04:47:22.608884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:26.275 [2024-11-27 04:47:22.608898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.275 [2024-11-27 04:47:22.608914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.275 [2024-11-27 04:47:22.608979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.275 [2024-11-27 04:47:22.608995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:26.275 [2024-11-27 04:47:22.609008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.275 [2024-11-27 04:47:22.609019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.275 [2024-11-27 04:47:22.609121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.275 [2024-11-27 04:47:22.609136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:26.275 [2024-11-27 04:47:22.609150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.275 [2024-11-27 04:47:22.609162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.275 [2024-11-27 04:47:22.609201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.275 [2024-11-27 04:47:22.609214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:26.275 [2024-11-27 04:47:22.609227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.275 [2024-11-27 04:47:22.609240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.275 [2024-11-27 04:47:22.609285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.275 [2024-11-27 04:47:22.609299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:26.275 [2024-11-27 04:47:22.609311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.275 [2024-11-27 04:47:22.609323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.275 [2024-11-27 04:47:22.609380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.275 [2024-11-27 04:47:22.609396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:26.275 [2024-11-27 04:47:22.609409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.276 [2024-11-27 04:47:22.609420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.276 [2024-11-27 04:47:22.609569] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 210.257 ms, result 0 00:30:27.662 00:30:27.662 00:30:27.662 04:47:23 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:30:27.662 
[2024-11-27 04:47:23.954033] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:30:27.662 [2024-11-27 04:47:23.954168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82284 ] 00:30:27.662 [2024-11-27 04:47:24.118509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.923 [2024-11-27 04:47:24.252526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.185 [2024-11-27 04:47:24.538550] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:28.185 [2024-11-27 04:47:24.538628] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:28.185 [2024-11-27 04:47:24.696067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.185 [2024-11-27 04:47:24.696128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:28.185 [2024-11-27 04:47:24.696142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:28.185 [2024-11-27 04:47:24.696151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.185 [2024-11-27 04:47:24.696198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.185 [2024-11-27 04:47:24.696210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:28.185 [2024-11-27 04:47:24.696218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:28.185 [2024-11-27 04:47:24.696226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.185 [2024-11-27 04:47:24.696246] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:28.185 [2024-11-27 04:47:24.696913] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:28.185 [2024-11-27 04:47:24.696941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.185 [2024-11-27 04:47:24.696950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:28.185 [2024-11-27 04:47:24.696969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:30:28.185 [2024-11-27 04:47:24.696977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.185 [2024-11-27 04:47:24.697234] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:28.185 [2024-11-27 04:47:24.697260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.185 [2024-11-27 04:47:24.697271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:28.185 [2024-11-27 04:47:24.697280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:28.185 [2024-11-27 04:47:24.697288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.185 [2024-11-27 04:47:24.697330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.185 [2024-11-27 04:47:24.697343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:28.185 [2024-11-27 04:47:24.697352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:30:28.185 [2024-11-27 04:47:24.697359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:28.185 [2024-11-27 04:47:24.697628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.185 [2024-11-27 04:47:24.697640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:28.185 [2024-11-27 04:47:24.697648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:30:28.185 [2024-11-27 04:47:24.697655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.185 [2024-11-27 04:47:24.697780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.185 [2024-11-27 04:47:24.697791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:28.185 [2024-11-27 04:47:24.697800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:30:28.185 [2024-11-27 04:47:24.697807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.185 [2024-11-27 04:47:24.697829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.185 [2024-11-27 04:47:24.697838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:28.185 [2024-11-27 04:47:24.697848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:28.185 [2024-11-27 04:47:24.697856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.185 [2024-11-27 04:47:24.697875] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:28.185 [2024-11-27 04:47:24.701598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.185 [2024-11-27 04:47:24.701630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:28.185 [2024-11-27 04:47:24.701640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.727 ms 00:30:28.185 [2024-11-27 04:47:24.701648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.185 [2024-11-27 04:47:24.701676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.185 [2024-11-27 04:47:24.701684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:28.185 [2024-11-27 04:47:24.701692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:28.185 [2024-11-27 04:47:24.701699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.185 [2024-11-27 04:47:24.701763] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:28.185 [2024-11-27 04:47:24.701785] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:28.185 [2024-11-27 04:47:24.701831] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:28.185 [2024-11-27 04:47:24.701846] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:28.185 [2024-11-27 04:47:24.701949] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:28.185 [2024-11-27 04:47:24.701959] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:28.185 [2024-11-27 04:47:24.701969] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:28.185 [2024-11-27 04:47:24.701979] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:28.185 [2024-11-27 04:47:24.701988] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:28.185 [2024-11-27 04:47:24.701998] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:28.185 [2024-11-27 04:47:24.702005] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:28.185 [2024-11-27 04:47:24.702012] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:28.185 [2024-11-27 04:47:24.702019] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:28.185 [2024-11-27 04:47:24.702026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.185 [2024-11-27 04:47:24.702033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:28.185 [2024-11-27 04:47:24.702040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:30:28.185 [2024-11-27 04:47:24.702047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.185 [2024-11-27 04:47:24.702128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.185 [2024-11-27 04:47:24.702136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:28.185 [2024-11-27 04:47:24.702144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:28.185 [2024-11-27 04:47:24.702153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.185 [2024-11-27 04:47:24.702253] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:28.185 [2024-11-27 04:47:24.702264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:28.185 [2024-11-27 04:47:24.702272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:28.185 [2024-11-27 04:47:24.702279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.185 [2024-11-27 04:47:24.702288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:28.185 [2024-11-27 04:47:24.702295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:28.185 [2024-11-27 04:47:24.702301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:28.185 [2024-11-27 04:47:24.702308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:28.185 [2024-11-27 04:47:24.702315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:28.185 [2024-11-27 04:47:24.702322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:28.186 [2024-11-27 04:47:24.702328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:28.186 [2024-11-27 04:47:24.702334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:28.186 [2024-11-27 04:47:24.702341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:28.186 [2024-11-27 04:47:24.702347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:28.186 [2024-11-27 04:47:24.702354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:28.186 [2024-11-27 04:47:24.702366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.186 [2024-11-27 04:47:24.702372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:28.186 [2024-11-27 04:47:24.702379] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:28.186 [2024-11-27 04:47:24.702385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.186 [2024-11-27 04:47:24.702392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:28.186 [2024-11-27 04:47:24.702399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:28.186 [2024-11-27 04:47:24.702405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.186 [2024-11-27 04:47:24.702411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:28.186 [2024-11-27 04:47:24.702418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:28.186 [2024-11-27 04:47:24.702424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.186 [2024-11-27 04:47:24.702431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:28.186 [2024-11-27 04:47:24.702437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:28.186 [2024-11-27 04:47:24.702443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.186 [2024-11-27 04:47:24.702450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:28.186 [2024-11-27 04:47:24.702456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:28.186 [2024-11-27 04:47:24.702462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.186 [2024-11-27 04:47:24.702468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:28.186 [2024-11-27 04:47:24.702474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:28.186 [2024-11-27 04:47:24.702480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:28.186 [2024-11-27 04:47:24.702487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:28.186 [2024-11-27 04:47:24.702494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:28.186 [2024-11-27 04:47:24.702501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:28.186 [2024-11-27 04:47:24.702507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:28.186 [2024-11-27 04:47:24.702513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:28.186 [2024-11-27 04:47:24.702520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.186 [2024-11-27 04:47:24.702527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:28.186 [2024-11-27 04:47:24.702533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:28.186 [2024-11-27 04:47:24.702540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.186 [2024-11-27 04:47:24.702546] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:28.186 [2024-11-27 04:47:24.702553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:28.186 [2024-11-27 04:47:24.702560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:28.186 [2024-11-27 04:47:24.702567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.186 [2024-11-27 04:47:24.702576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:28.186 [2024-11-27 04:47:24.702582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:28.186 [2024-11-27 
04:47:24.702589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:28.186 [2024-11-27 04:47:24.702595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:28.186 [2024-11-27 04:47:24.702601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:28.186 [2024-11-27 04:47:24.702607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:28.186 [2024-11-27 04:47:24.702614] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:28.186 [2024-11-27 04:47:24.702623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.186 [2024-11-27 04:47:24.702632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:28.186 [2024-11-27 04:47:24.702639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:28.186 [2024-11-27 04:47:24.702646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:28.186 [2024-11-27 04:47:24.702653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:28.186 [2024-11-27 04:47:24.702661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:28.186 [2024-11-27 04:47:24.702668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:28.186 [2024-11-27 04:47:24.702674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:28.186 [2024-11-27 04:47:24.702681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:28.186 [2024-11-27 04:47:24.702688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:28.186 [2024-11-27 04:47:24.702695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:28.186 [2024-11-27 04:47:24.702702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:28.186 [2024-11-27 04:47:24.702709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:28.186 [2024-11-27 04:47:24.702716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:28.186 [2024-11-27 04:47:24.702737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:28.186 [2024-11-27 04:47:24.702745] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:28.186 [2024-11-27 04:47:24.702754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.186 [2024-11-27 04:47:24.702762] 
upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:28.186 [2024-11-27 04:47:24.702768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:28.186 [2024-11-27 04:47:24.702776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:28.186 [2024-11-27 04:47:24.702783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:28.186 [2024-11-27 04:47:24.702790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.186 [2024-11-27 04:47:24.702797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:28.186 [2024-11-27 04:47:24.702804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:30:28.186 [2024-11-27 04:47:24.702811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.186 [2024-11-27 04:47:24.726860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.186 [2024-11-27 04:47:24.726895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:28.186 [2024-11-27 04:47:24.726906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.006 ms 00:30:28.186 [2024-11-27 04:47:24.726913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.186 [2024-11-27 04:47:24.726992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.186 [2024-11-27 04:47:24.727000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:28.186 [2024-11-27 04:47:24.727011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:28.186 [2024-11-27 04:47:24.727018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.448 [2024-11-27 04:47:24.768839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.448 [2024-11-27 04:47:24.768883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:28.448 [2024-11-27 04:47:24.768896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.770 ms 00:30:28.448 [2024-11-27 04:47:24.768904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.448 [2024-11-27 04:47:24.768949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.448 [2024-11-27 04:47:24.768968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:28.448 [2024-11-27 04:47:24.768977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:28.448 [2024-11-27 04:47:24.768984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.448 [2024-11-27 04:47:24.769083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.448 [2024-11-27 04:47:24.769094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:28.448 [2024-11-27 04:47:24.769103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:28.448 [2024-11-27 04:47:24.769111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.448 [2024-11-27 04:47:24.769226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.448 [2024-11-27 04:47:24.769237] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:28.448 [2024-11-27 04:47:24.769245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:30:28.448 [2024-11-27 04:47:24.769253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.448 [2024-11-27 04:47:24.782845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.448 [2024-11-27 04:47:24.782880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:28.448 [2024-11-27 04:47:24.782891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.572 ms 00:30:28.448 [2024-11-27 04:47:24.782899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.448 [2024-11-27 04:47:24.783015] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:28.448 [2024-11-27 04:47:24.783028] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:28.448 [2024-11-27 04:47:24.783038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.448 [2024-11-27 04:47:24.783049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:28.448 [2024-11-27 04:47:24.783057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:28.448 [2024-11-27 04:47:24.783064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.448 [2024-11-27 04:47:24.795612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.448 [2024-11-27 04:47:24.795652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:28.448 [2024-11-27 04:47:24.795664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.531 ms 00:30:28.448 [2024-11-27 04:47:24.795673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.448 [2024-11-27 04:47:24.795808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.448 [2024-11-27 04:47:24.795819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:28.448 [2024-11-27 04:47:24.795828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:30:28.448 [2024-11-27 04:47:24.795839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.448 [2024-11-27 04:47:24.795904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.448 [2024-11-27 04:47:24.795915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:28.448 [2024-11-27 04:47:24.795923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:30:28.448 [2024-11-27 04:47:24.795936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.448 [2024-11-27 04:47:24.796506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.448 [2024-11-27 04:47:24.796530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:28.448 [2024-11-27 04:47:24.796540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:30:28.448 [2024-11-27 04:47:24.796547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.448 [2024-11-27 04:47:24.796568] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:28.448 [2024-11-27 04:47:24.796578] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:30:28.448 [2024-11-27 04:47:24.796586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:28.448 [2024-11-27 04:47:24.796594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:28.448 [2024-11-27 04:47:24.796601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.448 [2024-11-27 04:47:24.808306] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:28.448 [2024-11-27 04:47:24.808441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.449 [2024-11-27 04:47:24.808456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:28.449 [2024-11-27 04:47:24.808466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.822 ms 00:30:28.449 [2024-11-27 04:47:24.808474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.449 [2024-11-27 04:47:24.810664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.449 [2024-11-27 04:47:24.810834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:28.449 [2024-11-27 04:47:24.810850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.170 ms 00:30:28.449 [2024-11-27 04:47:24.810859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.449 [2024-11-27 04:47:24.810927] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:30:28.449 [2024-11-27 04:47:24.811412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.449 [2024-11-27 04:47:24.811428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:28.449 [2024-11-27 04:47:24.811437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:30:28.449 [2024-11-27 04:47:24.811444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.449 [2024-11-27 04:47:24.811472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.449 [2024-11-27 04:47:24.811480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:28.449 [2024-11-27 04:47:24.811487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:28.449 [2024-11-27 04:47:24.811495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.449 [2024-11-27 04:47:24.811525] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:28.449 [2024-11-27 04:47:24.811535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.449 [2024-11-27 04:47:24.811542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:28.449 [2024-11-27 04:47:24.811549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:28.449 [2024-11-27 04:47:24.811557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.449 [2024-11-27 04:47:24.835778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.449 [2024-11-27 04:47:24.835816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:28.449 [2024-11-27 04:47:24.835829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.205 ms 00:30:28.449 [2024-11-27 04:47:24.835838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.449 [2024-11-27 04:47:24.835912] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.449 [2024-11-27 04:47:24.835922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:28.449 [2024-11-27 04:47:24.835931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:28.449 [2024-11-27 04:47:24.835938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.449 [2024-11-27 04:47:24.836908] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 140.407 ms, result 0 00:30:29.833  [2024-11-27T04:47:27.360Z] Copying: 37/1024 [MB] (37 MBps) [2024-11-27T04:47:28.307Z] Copying: 55/1024 [MB] (17 MBps) [2024-11-27T04:47:29.253Z] Copying: 84/1024 [MB] (29 MBps) [2024-11-27T04:47:30.195Z] Copying: 107/1024 [MB] (23 MBps) [2024-11-27T04:47:31.151Z] Copying: 126/1024 [MB] (19 MBps) [2024-11-27T04:47:32.096Z] Copying: 137/1024 [MB] (10 MBps) [2024-11-27T04:47:33.039Z] Copying: 147/1024 [MB] (10 MBps) [2024-11-27T04:47:34.426Z] Copying: 157/1024 [MB] (10 MBps) [2024-11-27T04:47:35.371Z] Copying: 168/1024 [MB] (10 MBps) [2024-11-27T04:47:36.314Z] Copying: 182376/1048576 [kB] (10076 kBps) [2024-11-27T04:47:37.257Z] Copying: 202/1024 [MB] (24 MBps) [2024-11-27T04:47:38.204Z] Copying: 252/1024 [MB] (49 MBps) [2024-11-27T04:47:39.148Z] Copying: 300/1024 [MB] (48 MBps) [2024-11-27T04:47:40.093Z] Copying: 348/1024 [MB] (47 MBps) [2024-11-27T04:47:41.036Z] Copying: 391/1024 [MB] (42 MBps) [2024-11-27T04:47:42.423Z] Copying: 433/1024 [MB] (42 MBps) [2024-11-27T04:47:43.365Z] Copying: 481/1024 [MB] (48 MBps) [2024-11-27T04:47:44.307Z] Copying: 528/1024 [MB] (46 MBps) [2024-11-27T04:47:45.252Z] Copying: 576/1024 [MB] (48 MBps) [2024-11-27T04:47:46.196Z] Copying: 623/1024 [MB] (46 MBps) [2024-11-27T04:47:47.136Z] Copying: 672/1024 [MB] (48 MBps) [2024-11-27T04:47:48.078Z] Copying: 721/1024 [MB] (48 MBps) [2024-11-27T04:47:49.459Z] Copying: 770/1024 [MB] (49 MBps) [2024-11-27T04:47:50.027Z] Copying: 822/1024 [MB] (52 MBps) [2024-11-27T04:47:51.411Z] Copying: 868/1024 [MB] (46 MBps) [2024-11-27T04:47:52.356Z] Copying: 910/1024 [MB] (42 MBps) [2024-11-27T04:47:53.299Z] Copying: 948/1024 [MB] (37 MBps) [2024-11-27T04:47:54.242Z] Copying: 980/1024 [MB] (32 MBps) [2024-11-27T04:47:54.813Z] Copying: 1009/1024 [MB] (28 MBps) [2024-11-27T04:47:55.073Z] Copying: 1024/1024 [MB] (average 34 MBps)[2024-11-27 04:47:55.021040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.486 [2024-11-27 04:47:55.021104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:58.486 [2024-11-27 04:47:55.021118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:58.486 [2024-11-27 04:47:55.021127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.486 [2024-11-27 04:47:55.021149] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:58.486 [2024-11-27 04:47:55.024601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.487 [2024-11-27 04:47:55.024788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:58.487 [2024-11-27 04:47:55.024806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.437 ms 00:30:58.487 [2024-11-27 04:47:55.024820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.487 [2024-11-27 04:47:55.025057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:58.487 [2024-11-27 04:47:55.025068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:58.487 [2024-11-27 04:47:55.025077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:30:58.487 [2024-11-27 04:47:55.025084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.487 [2024-11-27 04:47:55.025111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.487 [2024-11-27 04:47:55.025119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:58.487 [2024-11-27 04:47:55.025127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:58.487 [2024-11-27 04:47:55.025134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.487 [2024-11-27 04:47:55.025181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.487 [2024-11-27 04:47:55.025192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:58.487 [2024-11-27 04:47:55.025200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:58.487 [2024-11-27 04:47:55.025208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.487 [2024-11-27 04:47:55.025221] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:58.487 [2024-11-27 04:47:55.025232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:30:58.487 [2024-11-27 04:47:55.025241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 
[2024-11-27 04:47:55.025344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 
state: free 00:30:58.487 [2024-11-27 04:47:55.025530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 
0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:58.487 [2024-11-27 04:47:55.025748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.025995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.026002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:58.488 [2024-11-27 04:47:55.026017] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:58.488 [2024-11-27 04:47:55.026025] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 77665fbb-ce7f-48bd-bba3-969864502bb5 00:30:58.488 [2024-11-27 04:47:55.026032] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:30:58.488 [2024-11-27 04:47:55.026039] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1568 00:30:58.488 [2024-11-27 04:47:55.026046] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1536 00:30:58.488 [2024-11-27 04:47:55.026059] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0208 00:30:58.488 [2024-11-27 04:47:55.026066] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:58.488 [2024-11-27 04:47:55.026073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:58.488 [2024-11-27 04:47:55.026080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:58.488 [2024-11-27 04:47:55.026086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:58.488 [2024-11-27 04:47:55.026093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:58.488 [2024-11-27 04:47:55.026100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.488 [2024-11-27 04:47:55.026107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:58.488 [2024-11-27 04:47:55.026115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:30:58.488 [2024-11-27 04:47:55.026121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.488 [2024-11-27 04:47:55.041533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.488 [2024-11-27 04:47:55.041567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:58.488 [2024-11-27 04:47:55.041584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.396 ms 
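The statistics dump above reports total writes 1568 against user writes 1536, and the WAF line is simply their ratio: write amplification factor = total media writes / user writes = 1568 / 1536 ≈ 1.0208. A one-line check of that arithmetic, not part of the test suite and assuming bc is available:

    # Illustrative recomputation of the WAF printed in the stats dump above:
    # write amplification = total media writes / user writes.
    printf 'WAF: %.4f\n' "$(echo 'scale=10; 1568/1536' | bc)"   # -> WAF: 1.0208

A WAF just above 1.0 means the FTL wrote only ~2% more data to the media than the user submitted, i.e. almost no relocation or metadata overhead during this restore test.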
00:30:58.488 [2024-11-27 04:47:55.041593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.488 [2024-11-27 04:47:55.042071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.488 [2024-11-27 04:47:55.042138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:58.488 [2024-11-27 04:47:55.042223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:30:58.488 [2024-11-27 04:47:55.042234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.750 [2024-11-27 04:47:55.076364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.750 [2024-11-27 04:47:55.076400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:58.750 [2024-11-27 04:47:55.076410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.750 [2024-11-27 04:47:55.076417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.750 [2024-11-27 04:47:55.076474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.750 [2024-11-27 04:47:55.076482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:58.750 [2024-11-27 04:47:55.076489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.750 [2024-11-27 04:47:55.076497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.750 [2024-11-27 04:47:55.076546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.750 [2024-11-27 04:47:55.076559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:58.750 [2024-11-27 04:47:55.076567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.750 [2024-11-27 04:47:55.076574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.750 [2024-11-27 04:47:55.076589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.750 [2024-11-27 04:47:55.076596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:58.750 [2024-11-27 04:47:55.076603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.750 [2024-11-27 04:47:55.076610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.750 [2024-11-27 04:47:55.154018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.750 [2024-11-27 04:47:55.154064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:58.750 [2024-11-27 04:47:55.154076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.750 [2024-11-27 04:47:55.154083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.750 [2024-11-27 04:47:55.217625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.750 [2024-11-27 04:47:55.217675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:58.750 [2024-11-27 04:47:55.217686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.750 [2024-11-27 04:47:55.217695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.750 [2024-11-27 04:47:55.217781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.750 [2024-11-27 04:47:55.217791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:58.750 [2024-11-27 
04:47:55.217802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.750 [2024-11-27 04:47:55.217809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.750 [2024-11-27 04:47:55.217844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.750 [2024-11-27 04:47:55.217853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:58.750 [2024-11-27 04:47:55.217860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.750 [2024-11-27 04:47:55.217868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.750 [2024-11-27 04:47:55.217937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.750 [2024-11-27 04:47:55.217947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:58.750 [2024-11-27 04:47:55.217955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.750 [2024-11-27 04:47:55.217964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.750 [2024-11-27 04:47:55.217987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.750 [2024-11-27 04:47:55.217995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:58.750 [2024-11-27 04:47:55.218003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.750 [2024-11-27 04:47:55.218011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.750 [2024-11-27 04:47:55.218044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.750 [2024-11-27 04:47:55.218052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:58.750 [2024-11-27 04:47:55.218060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.750 [2024-11-27 04:47:55.218069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.750 [2024-11-27 04:47:55.218106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.750 [2024-11-27 04:47:55.218120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:58.750 [2024-11-27 04:47:55.218129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.750 [2024-11-27 04:47:55.218136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.750 [2024-11-27 04:47:55.218246] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 197.178 ms, result 0 00:30:59.688 00:30:59.688 00:30:59.688 04:47:56 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:01.066 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:01.066 04:47:57 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:01.066 04:47:57 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:31:01.066 04:47:57 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:01.365 04:47:57 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:01.365 04:47:57 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:01.365 Process with pid 80942 is not found 00:31:01.365 Remove shared memory files 00:31:01.365 04:47:57 ftl.ftl_restore_fast -- 
ftl/restore.sh@32 -- # killprocess 80942 00:31:01.365 04:47:57 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 80942 ']' 00:31:01.365 04:47:57 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 80942 00:31:01.365 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80942) - No such process 00:31:01.365 04:47:57 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 80942 is not found' 00:31:01.365 04:47:57 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:31:01.365 04:47:57 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:01.365 04:47:57 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:31:01.365 04:47:57 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_77665fbb-ce7f-48bd-bba3-969864502bb5_band_md /dev/hugepages/ftl_77665fbb-ce7f-48bd-bba3-969864502bb5_l2p_l1 /dev/hugepages/ftl_77665fbb-ce7f-48bd-bba3-969864502bb5_l2p_l2 /dev/hugepages/ftl_77665fbb-ce7f-48bd-bba3-969864502bb5_l2p_l2_ctx /dev/hugepages/ftl_77665fbb-ce7f-48bd-bba3-969864502bb5_nvc_md /dev/hugepages/ftl_77665fbb-ce7f-48bd-bba3-969864502bb5_p2l_pool /dev/hugepages/ftl_77665fbb-ce7f-48bd-bba3-969864502bb5_sb /dev/hugepages/ftl_77665fbb-ce7f-48bd-bba3-969864502bb5_sb_shm /dev/hugepages/ftl_77665fbb-ce7f-48bd-bba3-969864502bb5_trim_bitmap /dev/hugepages/ftl_77665fbb-ce7f-48bd-bba3-969864502bb5_trim_log /dev/hugepages/ftl_77665fbb-ce7f-48bd-bba3-969864502bb5_trim_md /dev/hugepages/ftl_77665fbb-ce7f-48bd-bba3-969864502bb5_vmap 00:31:01.365 04:47:57 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:31:01.365 04:47:57 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:01.365 04:47:57 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:31:01.365 ************************************ 00:31:01.365 END TEST ftl_restore_fast 00:31:01.365 ************************************ 00:31:01.365 00:31:01.365 real 2m44.513s 00:31:01.365 user 2m34.843s 00:31:01.365 sys 0m10.829s 00:31:01.365 04:47:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.365 04:47:57 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:01.365 04:47:57 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:31:01.365 04:47:57 ftl -- ftl/ftl.sh@14 -- # killprocess 75027 00:31:01.365 Process with pid 75027 is not found 00:31:01.365 04:47:57 ftl -- common/autotest_common.sh@954 -- # '[' -z 75027 ']' 00:31:01.365 04:47:57 ftl -- common/autotest_common.sh@958 -- # kill -0 75027 00:31:01.365 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75027) - No such process 00:31:01.365 04:47:57 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75027 is not found' 00:31:01.365 04:47:57 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:31:01.365 04:47:57 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=82655 00:31:01.365 04:47:57 ftl -- ftl/ftl.sh@20 -- # waitforlisten 82655 00:31:01.365 04:47:57 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.365 04:47:57 ftl -- common/autotest_common.sh@835 -- # '[' -z 82655 ']' 00:31:01.365 04:47:57 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
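Both killprocess calls in this log take the "not found" path: kill -0 fails because the target pid already exited (the shell reports "No such process"), and the helper just echoes "Process with pid ... is not found" instead of failing the test; remove_shm then deletes the FTL hugepage files with rm -f, which tolerates missing files. A minimal sketch of that tolerant-cleanup pattern, with hypothetical bodies, not the actual autotest_common.sh implementation:

    # Sketch of the tolerant cleanup pattern seen above (illustrative only).
    killprocess() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid" && wait "$pid"                  # process exists: terminate and reap
        else
            echo "Process with pid $pid is not found"   # already gone: log it, not an error
        fi
    }
    remove_shm() {
        echo 'Remove shared memory files'
        rm -f /dev/hugepages/ftl_*   # rm -f never fails on files that are already gone
        rm -f /dev/shm/iscsi
    }

The design point is idempotence: teardown must succeed whether or not the process and its shared-memory files still exist, so a test that already cleaned up after itself does not fail in the epilogue.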
00:31:01.365 04:47:57 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.365 04:47:57 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.365 04:47:57 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.365 04:47:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:01.365 [2024-11-27 04:47:57.852388] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:31:01.365 [2024-11-27 04:47:57.852899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82655 ] 00:31:01.633 [2024-11-27 04:47:58.004186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.633 [2024-11-27 04:47:58.101651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.199 04:47:58 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.199 04:47:58 ftl -- common/autotest_common.sh@868 -- # return 0 00:31:02.199 04:47:58 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:02.459 nvme0n1 00:31:02.459 04:47:58 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:31:02.459 04:47:58 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:02.459 04:47:58 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:02.729 04:47:59 ftl -- ftl/common.sh@28 -- # stores=0700349e-1560-4654-a93b-b0467ee18fca 00:31:02.729 04:47:59 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:31:02.729 04:47:59 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0700349e-1560-4654-a93b-b0467ee18fca 00:31:02.988 04:47:59 ftl -- ftl/ftl.sh@23 -- # killprocess 82655 00:31:02.988 04:47:59 ftl -- common/autotest_common.sh@954 -- # '[' -z 82655 ']' 00:31:02.988 04:47:59 ftl -- common/autotest_common.sh@958 -- # kill -0 82655 00:31:02.988 04:47:59 ftl -- common/autotest_common.sh@959 -- # uname 00:31:02.988 04:47:59 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:02.988 04:47:59 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82655 00:31:02.988 killing process with pid 82655 00:31:02.988 04:47:59 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:02.988 04:47:59 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:02.988 04:47:59 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82655' 00:31:02.988 04:47:59 ftl -- common/autotest_common.sh@973 -- # kill 82655 00:31:02.988 04:47:59 ftl -- common/autotest_common.sh@978 -- # wait 82655 00:31:04.365 04:48:00 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:04.624 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:04.624 Waiting for block devices as requested 00:31:04.624 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:04.883 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:04.883 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:04.883 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:10.158 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:10.158 04:48:06 ftl -- ftl/ftl.sh@28 
-- # remove_shm 00:31:10.158 04:48:06 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:10.158 Remove shared memory files 00:31:10.158 04:48:06 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:31:10.158 04:48:06 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:31:10.158 04:48:06 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:31:10.158 04:48:06 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:10.158 04:48:06 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:31:10.158 ************************************ 00:31:10.158 END TEST ftl 00:31:10.158 ************************************ 00:31:10.158 00:31:10.158 real 11m21.767s 00:31:10.158 user 13m44.048s 00:31:10.158 sys 1m13.516s 00:31:10.158 04:48:06 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:10.158 04:48:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:10.158 04:48:06 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:10.158 04:48:06 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:10.158 04:48:06 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:10.158 04:48:06 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:10.158 04:48:06 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:10.158 04:48:06 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:10.158 04:48:06 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:10.158 04:48:06 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:31:10.158 04:48:06 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:31:10.158 04:48:06 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:31:10.158 04:48:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:10.158 04:48:06 -- common/autotest_common.sh@10 -- # set +x 00:31:10.158 04:48:06 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:31:10.158 04:48:06 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:31:10.158 04:48:06 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:31:10.158 04:48:06 -- common/autotest_common.sh@10 -- # set +x 00:31:11.097 INFO: APP EXITING 00:31:11.097 INFO: killing all VMs 00:31:11.097 INFO: killing vhost app 00:31:11.097 INFO: EXIT DONE 00:31:11.357 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:11.617 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:11.617 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:11.617 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:31:11.876 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:31:12.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:12.397 Cleaning 00:31:12.397 Removing: /var/run/dpdk/spdk0/config 00:31:12.397 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:12.397 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:12.397 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:12.397 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:12.397 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:12.397 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:12.397 Removing: /var/run/dpdk/spdk0 00:31:12.397 Removing: /var/run/dpdk/spdk_pid56957 00:31:12.397 Removing: /var/run/dpdk/spdk_pid57159 00:31:12.397 Removing: /var/run/dpdk/spdk_pid57366 00:31:12.397 Removing: /var/run/dpdk/spdk_pid57459 00:31:12.397 Removing: /var/run/dpdk/spdk_pid57499 00:31:12.397 Removing: /var/run/dpdk/spdk_pid57620 00:31:12.397 Removing: /var/run/dpdk/spdk_pid57634 00:31:12.397 Removing: /var/run/dpdk/spdk_pid57827 00:31:12.397 Removing: 
/var/run/dpdk/spdk_pid57926 00:31:12.397 Removing: /var/run/dpdk/spdk_pid58022 00:31:12.397 Removing: /var/run/dpdk/spdk_pid58127 00:31:12.397 Removing: /var/run/dpdk/spdk_pid58219 00:31:12.397 Removing: /var/run/dpdk/spdk_pid58253 00:31:12.397 Removing: /var/run/dpdk/spdk_pid58295 00:31:12.397 Removing: /var/run/dpdk/spdk_pid58364 00:31:12.397 Removing: /var/run/dpdk/spdk_pid58466 00:31:12.397 Removing: /var/run/dpdk/spdk_pid58891 00:31:12.397 Removing: /var/run/dpdk/spdk_pid58955 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59007 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59023 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59114 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59130 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59227 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59243 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59296 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59308 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59361 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59379 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59539 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59576 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59659 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59828 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59910 00:31:12.397 Removing: /var/run/dpdk/spdk_pid59946 00:31:12.397 Removing: /var/run/dpdk/spdk_pid60364 00:31:12.397 Removing: /var/run/dpdk/spdk_pid60462 00:31:12.397 Removing: /var/run/dpdk/spdk_pid60574 00:31:12.397 Removing: /var/run/dpdk/spdk_pid60627 00:31:12.397 Removing: /var/run/dpdk/spdk_pid60647 00:31:12.397 Removing: /var/run/dpdk/spdk_pid60731 00:31:12.397 Removing: /var/run/dpdk/spdk_pid61356 00:31:12.397 Removing: /var/run/dpdk/spdk_pid61392 00:31:12.397 Removing: /var/run/dpdk/spdk_pid61868 00:31:12.397 Removing: /var/run/dpdk/spdk_pid61961 00:31:12.397 Removing: /var/run/dpdk/spdk_pid62071 00:31:12.397 Removing: /var/run/dpdk/spdk_pid62119 00:31:12.397 Removing: /var/run/dpdk/spdk_pid62144 00:31:12.397 Removing: /var/run/dpdk/spdk_pid62170 00:31:12.397 Removing: /var/run/dpdk/spdk_pid64030 00:31:12.397 Removing: /var/run/dpdk/spdk_pid64156 00:31:12.397 Removing: /var/run/dpdk/spdk_pid64167 00:31:12.397 Removing: /var/run/dpdk/spdk_pid64185 00:31:12.397 Removing: /var/run/dpdk/spdk_pid64225 00:31:12.397 Removing: /var/run/dpdk/spdk_pid64229 00:31:12.397 Removing: /var/run/dpdk/spdk_pid64241 00:31:12.397 Removing: /var/run/dpdk/spdk_pid64286 00:31:12.397 Removing: /var/run/dpdk/spdk_pid64290 00:31:12.397 Removing: /var/run/dpdk/spdk_pid64302 00:31:12.397 Removing: /var/run/dpdk/spdk_pid64347 00:31:12.397 Removing: /var/run/dpdk/spdk_pid64351 00:31:12.397 Removing: /var/run/dpdk/spdk_pid64363 00:31:12.397 Removing: /var/run/dpdk/spdk_pid65745 00:31:12.397 Removing: /var/run/dpdk/spdk_pid65842 00:31:12.397 Removing: /var/run/dpdk/spdk_pid67243 00:31:12.397 Removing: /var/run/dpdk/spdk_pid68986 00:31:12.397 Removing: /var/run/dpdk/spdk_pid69060 00:31:12.397 Removing: /var/run/dpdk/spdk_pid69132 00:31:12.397 Removing: /var/run/dpdk/spdk_pid69234 00:31:12.397 Removing: /var/run/dpdk/spdk_pid69331 00:31:12.397 Removing: /var/run/dpdk/spdk_pid69430 00:31:12.397 Removing: /var/run/dpdk/spdk_pid69498 00:31:12.397 Removing: /var/run/dpdk/spdk_pid69575 00:31:12.397 Removing: /var/run/dpdk/spdk_pid69679 00:31:12.397 Removing: /var/run/dpdk/spdk_pid69775 00:31:12.397 Removing: /var/run/dpdk/spdk_pid69866 00:31:12.397 Removing: /var/run/dpdk/spdk_pid69935 00:31:12.397 Removing: /var/run/dpdk/spdk_pid70010 00:31:12.397 Removing: /var/run/dpdk/spdk_pid70122 
00:31:12.397 Removing: /var/run/dpdk/spdk_pid70208 00:31:12.397 Removing: /var/run/dpdk/spdk_pid70309 00:31:12.397 Removing: /var/run/dpdk/spdk_pid70373 00:31:12.397 Removing: /var/run/dpdk/spdk_pid70454 00:31:12.397 Removing: /var/run/dpdk/spdk_pid70558 00:31:12.397 Removing: /var/run/dpdk/spdk_pid70650 00:31:12.397 Removing: /var/run/dpdk/spdk_pid70752 00:31:12.397 Removing: /var/run/dpdk/spdk_pid70822 00:31:12.397 Removing: /var/run/dpdk/spdk_pid70899 00:31:12.397 Removing: /var/run/dpdk/spdk_pid70979 00:31:12.658 Removing: /var/run/dpdk/spdk_pid71053 00:31:12.658 Removing: /var/run/dpdk/spdk_pid71162 00:31:12.658 Removing: /var/run/dpdk/spdk_pid71257 00:31:12.658 Removing: /var/run/dpdk/spdk_pid71347 00:31:12.658 Removing: /var/run/dpdk/spdk_pid71420 00:31:12.658 Removing: /var/run/dpdk/spdk_pid71496 00:31:12.658 Removing: /var/run/dpdk/spdk_pid71570 00:31:12.658 Removing: /var/run/dpdk/spdk_pid71644 00:31:12.658 Removing: /var/run/dpdk/spdk_pid71753 00:31:12.658 Removing: /var/run/dpdk/spdk_pid71844 00:31:12.658 Removing: /var/run/dpdk/spdk_pid71992 00:31:12.658 Removing: /var/run/dpdk/spdk_pid72267 00:31:12.658 Removing: /var/run/dpdk/spdk_pid72304 00:31:12.658 Removing: /var/run/dpdk/spdk_pid72762 00:31:12.658 Removing: /var/run/dpdk/spdk_pid72941 00:31:12.658 Removing: /var/run/dpdk/spdk_pid73038 00:31:12.658 Removing: /var/run/dpdk/spdk_pid73151 00:31:12.658 Removing: /var/run/dpdk/spdk_pid73205 00:31:12.658 Removing: /var/run/dpdk/spdk_pid73225 00:31:12.658 Removing: /var/run/dpdk/spdk_pid73547 00:31:12.658 Removing: /var/run/dpdk/spdk_pid73603 00:31:12.658 Removing: /var/run/dpdk/spdk_pid73679 00:31:12.658 Removing: /var/run/dpdk/spdk_pid74081 00:31:12.658 Removing: /var/run/dpdk/spdk_pid74227 00:31:12.658 Removing: /var/run/dpdk/spdk_pid75027 00:31:12.658 Removing: /var/run/dpdk/spdk_pid75160 00:31:12.658 Removing: /var/run/dpdk/spdk_pid75325 00:31:12.658 Removing: /var/run/dpdk/spdk_pid75411 00:31:12.658 Removing: /var/run/dpdk/spdk_pid75714 00:31:12.658 Removing: /var/run/dpdk/spdk_pid75962 00:31:12.658 Removing: /var/run/dpdk/spdk_pid76293 00:31:12.658 Removing: /var/run/dpdk/spdk_pid76475 00:31:12.658 Removing: /var/run/dpdk/spdk_pid76572 00:31:12.658 Removing: /var/run/dpdk/spdk_pid76619 00:31:12.658 Removing: /var/run/dpdk/spdk_pid76717 00:31:12.658 Removing: /var/run/dpdk/spdk_pid76739 00:31:12.658 Removing: /var/run/dpdk/spdk_pid76790 00:31:12.658 Removing: /var/run/dpdk/spdk_pid76968 00:31:12.658 Removing: /var/run/dpdk/spdk_pid77183 00:31:12.658 Removing: /var/run/dpdk/spdk_pid77456 00:31:12.658 Removing: /var/run/dpdk/spdk_pid77775 00:31:12.658 Removing: /var/run/dpdk/spdk_pid78071 00:31:12.658 Removing: /var/run/dpdk/spdk_pid78409 00:31:12.658 Removing: /var/run/dpdk/spdk_pid78540 00:31:12.658 Removing: /var/run/dpdk/spdk_pid78621 00:31:12.658 Removing: /var/run/dpdk/spdk_pid78984 00:31:12.658 Removing: /var/run/dpdk/spdk_pid79043 00:31:12.658 Removing: /var/run/dpdk/spdk_pid79337 00:31:12.658 Removing: /var/run/dpdk/spdk_pid79652 00:31:12.658 Removing: /var/run/dpdk/spdk_pid79995 00:31:12.658 Removing: /var/run/dpdk/spdk_pid80100 00:31:12.658 Removing: /var/run/dpdk/spdk_pid80142 00:31:12.658 Removing: /var/run/dpdk/spdk_pid80200 00:31:12.658 Removing: /var/run/dpdk/spdk_pid80256 00:31:12.658 Removing: /var/run/dpdk/spdk_pid80312 00:31:12.658 Removing: /var/run/dpdk/spdk_pid80493 00:31:12.658 Removing: /var/run/dpdk/spdk_pid80566 00:31:12.658 Removing: /var/run/dpdk/spdk_pid80634 00:31:12.658 Removing: /var/run/dpdk/spdk_pid80684 00:31:12.658 Removing: 
/var/run/dpdk/spdk_pid80723 00:31:12.658 Removing: /var/run/dpdk/spdk_pid80791 00:31:12.658 Removing: /var/run/dpdk/spdk_pid80942 00:31:12.658 Removing: /var/run/dpdk/spdk_pid81150 00:31:12.658 Removing: /var/run/dpdk/spdk_pid81391 00:31:12.658 Removing: /var/run/dpdk/spdk_pid81960 00:31:12.658 Removing: /var/run/dpdk/spdk_pid82284 00:31:12.658 Removing: /var/run/dpdk/spdk_pid82655 00:31:12.658 Clean 00:31:12.658 04:48:09 -- common/autotest_common.sh@1453 -- # return 0 00:31:12.658 04:48:09 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:31:12.658 04:48:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:12.658 04:48:09 -- common/autotest_common.sh@10 -- # set +x 00:31:12.658 04:48:09 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:31:12.658 04:48:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:12.658 04:48:09 -- common/autotest_common.sh@10 -- # set +x 00:31:12.658 04:48:09 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:12.658 04:48:09 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:12.658 04:48:09 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:12.658 04:48:09 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:31:12.658 04:48:09 -- spdk/autotest.sh@398 -- # hostname 00:31:12.658 04:48:09 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:12.919 geninfo: WARNING: invalid characters removed from testname! 
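The lcov capture above writes cov_test.info tagged with the fedora39 hostname (the geninfo warning is it sanitizing invalid characters out of that -t testname); the autotest.sh steps that follow then merge the test run with the baseline and strip sources that should not count toward coverage. A condensed sketch of that merge-and-filter sequence, with paths abbreviated relative to the repo and the many --rc options omitted:

    # Condensed from the lcov steps in this log (flags abbreviated for clarity).
    cd /home/vagrant/spdk_repo
    lcov -q -a output/cov_base.info -a output/cov_test.info \
         -o output/cov_total.info                          # merge baseline + test run
    lcov -q -r output/cov_total.info '*/dpdk/*' \
         -o output/cov_total.info                          # drop DPDK sources
    lcov -q -r output/cov_total.info '/usr/*' \
         --ignore-errors unused,unused \
         -o output/cov_total.info                          # drop system headers

The same -r filtering is repeated below for '*/examples/vmd/*', '*/app/spdk_lspci/*', and '*/app/spdk_top/*', so the final cov_total.info covers only the SPDK library code the test actually exercises.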
00:31:39.496 04:48:32 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:39.496 04:48:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:42.035 04:48:38 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:44.607 04:48:40 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:47.155 04:48:43 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:49.707 04:48:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:53.005 04:48:49 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:53.005 04:48:49 -- spdk/autorun.sh@1 -- $ timing_finish 00:31:53.005 04:48:49 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:31:53.005 04:48:49 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:53.005 04:48:49 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:31:53.005 04:48:49 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:53.005 + [[ -n 5037 ]] 00:31:53.005 + sudo kill 5037 00:31:53.015 [Pipeline] } 00:31:53.030 [Pipeline] // timeout 00:31:53.035 [Pipeline] } 00:31:53.050 [Pipeline] // stage 00:31:53.055 [Pipeline] } 00:31:53.069 [Pipeline] // catchError 00:31:53.079 [Pipeline] stage 00:31:53.080 [Pipeline] { (Stop VM) 00:31:53.092 [Pipeline] sh 00:31:53.387 + vagrant halt 00:31:55.993 ==> default: Halting domain... 
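The Stop VM stage above halts the domain gracefully; the stage that follows removes it with vagrant destroy -f, where -f skips the interactive confirmation so the pipeline cannot hang waiting for input. A minimal sketch of the same teardown, assuming the fallback on a failed halt, which this run did not need:

    # Teardown sketch modeled on the Stop VM / Removing domain stages below.
    # The '|| true' fallback is an assumption; this log shows a clean halt.
    vagrant halt || true         # graceful shutdown; ignore a hung or failed halt
    vagrant destroy -f           # -f: force, no interactive confirmation
    mv output /var/jenkins/workspace/nvme-vg-autotest/output   # preserve artifacts

Moving the output directory back into the Jenkins workspace before cleanWs runs is what lets the later archiveArtifacts step pick up the results even though the VM itself is gone.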
00:32:02.584 [Pipeline] sh 00:32:02.864 + vagrant destroy -f 00:32:06.226 ==> default: Removing domain... 00:32:06.240 [Pipeline] sh 00:32:06.527 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:32:06.537 [Pipeline] } 00:32:06.552 [Pipeline] // stage 00:32:06.557 [Pipeline] } 00:32:06.572 [Pipeline] // dir 00:32:06.577 [Pipeline] } 00:32:06.592 [Pipeline] // wrap 00:32:06.598 [Pipeline] } 00:32:06.611 [Pipeline] // catchError 00:32:06.621 [Pipeline] stage 00:32:06.623 [Pipeline] { (Epilogue) 00:32:06.636 [Pipeline] sh 00:32:06.923 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:13.513 [Pipeline] catchError 00:32:13.515 [Pipeline] { 00:32:13.527 [Pipeline] sh 00:32:13.812 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:13.812 Artifacts sizes are good 00:32:13.822 [Pipeline] } 00:32:13.836 [Pipeline] // catchError 00:32:13.848 [Pipeline] archiveArtifacts 00:32:13.856 Archiving artifacts 00:32:13.976 [Pipeline] cleanWs 00:32:13.991 [WS-CLEANUP] Deleting project workspace... 00:32:13.991 [WS-CLEANUP] Deferred wipeout is used... 00:32:13.997 [WS-CLEANUP] done 00:32:13.999 [Pipeline] } 00:32:14.018 [Pipeline] // stage 00:32:14.025 [Pipeline] } 00:32:14.041 [Pipeline] // node 00:32:14.047 [Pipeline] End of Pipeline 00:32:14.102 Finished: SUCCESS